Adversa AI Red Team Invents Technology for Ethical Hacking of Facial Recognition Systems

Adversa AI, the leading Trusted AI Research startup, has demonstrated a new attack method on AI facial recognition applications. By making imperceptible changes to human faces, the method causes an AI-driven facial recognition algorithm to misidentify people. Compared with similar approaches, it is transferable across AI models while also being far more accurate, stealthy, and resource-efficient.
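Adversa has not published the details of its method, so purely as an illustration of the general idea, the sketch below shows a PGD-style "impersonation" perturbation: a noise pattern, bounded so it stays invisible to humans, that pulls a face embedding toward a target identity. The `resnet18` backbone and the `impersonate` helper are hypothetical stand-ins, not Adversa's technology.

```python
# Minimal sketch of an imperceptible "impersonation" perturbation (PGD-style).
# NOT Adversa's unpublished method; resnet18 is a stand-in for a face encoder.
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet18(weights=None).eval()  # stand-in face-embedding model

def impersonate(image, target_emb, epsilon=4 / 255, lr=1 / 255, steps=50):
    """Nudge `image` so its embedding approaches `target_emb`, under an
    L-infinity bound `epsilon` that keeps the change imperceptible."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = encoder(image + delta)
        # Maximize cosine similarity to the target identity's embedding.
        loss = -F.cosine_similarity(emb, target_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()                   # signed gradient step
            delta.clamp_(-epsilon, epsilon)                   # imperceptibility bound
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels valid
        delta.grad.zero_()
    return (image + delta).detach()
```

Here `target_emb` would be the encoder's output for a photo of the person being impersonated (Elon Musk, in the demonstration below), with images assumed to be tensors normalized to [0, 1].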

Adversa AI Red Team has demonstrated a proof-of-concept attack against PimEyes, the most popular and advanced face search engine for public images and a close analog of Clearview, the commercial facial recognition database sold to law enforcement and governments. PimEyes was tricked into mistaking Adversa's CEO for Elon Musk in a photo.

Uniquely, this is a black-box attack, developed without any detailed knowledge of the algorithms used by the search engine, and the exploit transfers to other facial recognition engines. Because the attack allows malefactors to camouflage themselves in a variety of ways, we have named it Adversarial Octopus, after the animal's stealth, precision, and adaptability.
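To make the black-box transferability claim concrete: a typical test crafts the perturbation on a locally trained surrogate model, then checks whether it also breaks a match on a second model the attacker never queried for gradients. The sketch below is an assumption-laden illustration, again with torchvision backbones standing in for real face encoders.

```python
# Hedged sketch of a black-box transferability check: craft on a surrogate,
# then evaluate on a different "victim" model the attacker has no access to.
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet18(weights=None).eval()         # attacker's local model
victim = models.mobilenet_v3_small(weights=None).eval()  # stand-in for the unseen engine

def evade_on_surrogate(image, epsilon=4 / 255):
    """One signed-gradient step (from a random start, where the similarity
    gradient is non-zero) pushing the embedding away from the original."""
    baseline = surrogate(image).detach()
    x = (image + torch.empty_like(image).uniform_(-epsilon, epsilon)).clamp(0, 1)
    x.requires_grad_(True)
    F.cosine_similarity(surrogate(x), baseline).mean().backward()
    return (image - epsilon * x.grad.sign()).clamp(0, 1).detach()

def transfers(image, adv_image, match_threshold=0.9):
    """Did the perturbation crafted elsewhere also break the victim's match?"""
    with torch.no_grad():
        sim = F.cosine_similarity(victim(image), victim(adv_image)).mean()
    return sim.item() < match_threshold
```

A perturbation that flips matches on models the attacker never touched is what makes the black-box property of Adversarial Octopus dangerous in practice.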

The existence of such vulnerabilities in AI applications, and in facial recognition engines in particular, may lead to dire consequences; they can be exploited in both poisoning and evasion scenarios, such as the following:

  • Hacktivists may wreak havoc in AI-driven internet platforms that use face properties as input for decisions or further training. Attackers can poison or evade the algorithms of big internet companies by manipulating their profile pictures.
  • Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity-verification systems in banks, trading platforms, or other services that offer authenticated remote assistance. The attack can also be stealthier than traditional deepfakes in any scenario where those apply.
  • Dissidents may secretly use it to hide their internet activities in social media from law enforcement; it resembles a mask or fake ID for the virtual world we now live in.

Adversa AI recently released the world's first analytical report covering a decade of growing activity in the Secure and Trusted AI field. In the wake of interest in practical solutions for securing AI systems against advanced adversarial attacks, we have developed our own technology for testing facial recognition systems against such attacks. We are looking for early adopters and forward-thinking technology companies to partner with us on adding adversarial testing to their SDLC and ML lifecycle, increasing trust in their AI applications and providing customers with best-of-breed solutions.
