Artificial Intelligence | News | Insights | AiThority

Why Facial Recognition Providers Must Take Consumer Privacy Seriously

Consumer privacy has made big headlines in recent years with the Facebook–Cambridge Analytica scandal, Europe’s GDPR, and high-profile breaches at companies like Equifax. It’s clear that the data of millions of consumers is at risk every day, and that any company that wishes to handle that data must protect both its security and its privacy to the highest degree. This is especially true for companies that build and sell AI-enabled facial recognition solutions.

As CEO of an AI-enabled software company specializing in facial recognition solutions, I’ve made data security and privacy among my top priorities. Our pro-privacy stance goes beyond mere privacy by design engineering methodology. We regularly provide our customers with education and best practices, and we have even reached out to US lawmakers, lobbying for sensible pro-privacy regulations governing the technology we sell.


What all AI companies should understand is that clear, sensible, and ethical regulation is a good thing. Trouble arises when regulations fail to take emerging technologies like facial recognition and AI into account. Europe’s GDPR, for example, does not account for AI: Machine Learning uses consumer data to train computer systems that make decisions in real time, and that data often cannot be explained or accessed after the fact.

Our goal, as an industry, should be regulations that protect privacy without grinding innovation to a halt, such as Washington House Bill 1493, which strengthens consumer privacy without limiting innovation. In the absence of regulation, AI companies should do their best to self-regulate, with the goal of avoiding privacy scandals.

What Happens When Privacy Is Not Taken Seriously

The reason we’ve taken privacy so seriously is that history has shown, time and again, how crippling privacy scandals can be. Evernote, once among Silicon Valley’s most beloved startups of 2010, faced a brutal backlash in 2015 when the company allowed employees to read customer data to improve its service. MoviePass likewise provoked a scandal when it was discovered to be tracking where users drove after leaving movie theaters. And the Cambridge Analytica scandal resulted in Facebook CEO Mark Zuckerberg testifying before Congress. These are just a few of the major scandals that have resulted from companies skirting their duty to protect consumer privacy.



The Importance of User Trust

While all AI companies need to take public privacy concerns seriously, it’s important to remember that the public’s notion of privacy is in constant flux. Privacy, therefore, doesn’t have to come at the expense of innovation. Consumers are typically willing to trade some of their privacy for the convenience and benefits of emerging technologies. Case in point: Google likely knows more about me at this point than many of my closest friends do. Yet the sheer convenience Google offers has led me, and countless other consumers, to feel at ease using its products.

Part of this is because Google lets users opt into services, such as location tracking, that some may find intrusive. Another part is that Google has established trust with its users by not demonstrably misusing their private data. As long as the benefit of the user experience exceeds the perceived risks (and actual breaches) of user data, user trust will remain intact. There is much at stake in handling user data, and therefore a great deal of focus on not breaking that trust.

Allowing Consumers to Opt-In

I’ve found that emerging technologies are most readily adopted when consumers are given the option to opt in to their use. A noteworthy example is the Apple iPhone X, which broke sales records after launching with a facial recognition system designed to protect user data. Everyone purchasing the new iPhone was aware of the feature, and thus, by purchasing the phone, was opting into the technology. The public has also given its stamp of approval to Delta’s use of facial recognition to remove friction from the boarding process. We’ll soon see consumer adoption of facial recognition in other areas of daily life, such as retail, as well.


As we move further into the 21st century, facial recognition and other AI technologies will continue to grow in prevalence. As long as those companies that provide biometric technologies do their part to take consumer privacy seriously, I believe consumers will continue to offer their stamp of approval.

(This article was first published on our website in Jan 2019)

