
Deepfakes Dominate the Fraud Landscape; Instances Jump by 2,137% Over the Last Three Years

Respondents estimate that 42.5% of detected fraud attempts use AI, and 29% of those attempts are considered successful.

According to the latest research on AI-enabled cybercrime, one in fifteen fraud attempts now uses deepfakes to attack not just B2C firms but, increasingly, B2B ones as well. Signicat published a report identifying the growing threat of AI deepfakes and the current gaps in organizations' defenses. Over the next few years, the menace of AI-driven identity fraud will only grow. Most organizations are unprepared for these threats and have yet to implement measures to mitigate the risks.

So, when the risks are imminent, shouldn't business leaders up the ante against these threats, especially when they are likely to face AI-driven attackers?

Signicat found that attackers are using AI to accomplish two things. First, they use AI to make their attacks more effective. Second, they use AI to magnify the scale of fraud, turning it into an industrial-scale phenomenon. That is why we see so much activity on the dark web and in ransomware-as-a-service.

Here's a quick drill-down on the key findings from the report on deepfake fraud and its AI momentum.

The Rise of Deepfake Fraud Will Put You to Shame


Around the peak COVID months, deepfake content was used mostly to create minor nuisances. Today, deepfakes have emerged as the biggest factor in AI-driven identity threats and social engineering attacks. 29% of AI-enabled fraud attempts are successful. Many of these attacks have managed to bring down systems and demand first, second, and even third ransoms.

AI is behind 38% of the revenue lost to such fraud. That puts the onus squarely on IT and security leaders, who understand AI's role in cybersecurity but have so far failed to build a strong framework or governance against these AI-led attacks.

Time to Upgrade and Upskill

IT and security leaders have identified the major vulnerabilities that enable AI-driven identity fraud. However, budgets to upgrade fraud detection and prevention technologies are not the same as budgets to actually implement preventive measures, and both are needed. While the success rate of AI-driven fraud has stayed more or less steady over the last three years, decision-makers should not let that lull them into overlooking their vulnerabilities. It is time to upgrade systems and procedures, and to upskill employees.

Asger Hattel, CEO at Signicat, said: "AI is only going to get more sophisticated from now on. While our research shows that fraud prevention decision-makers understand the threat, they need the expertise and resources necessary to prevent it from becoming a major threat. A key part of this will be the use of layered AI-enabled fraud prevention tools, to combat these threats with the best that technology offers."

So, deepfakes do pose a significant challenge. However, the scarcity of skilled professionals and experts who can handle these advanced tools is an even bigger one.
