
Deepfake Attacks Have Doubled: How to Thrive Despite the Rising Threat

By: Henry Patishman, Executive VP, Identity Verification Solutions, Regula

Over the last two years, the number of businesses that have encountered deepfake threats has nearly doubled. So has the price companies pay for such encounters – and here I mean only the direct financial losses caused by each deepfake attack on a company. These are arguably the most important findings of the ongoing research into the deepfake threat that we conduct at Regula. But let’s dive into the nuances.


The steep rise in AI-generated fraud

Our 2024 survey data shows an unprecedented rise in video deepfakes compared to the results of the previous study, conducted in 2022. While 29% of fraud decision-makers across Australia, France, Germany, Mexico, Turkey, the UAE, the UK, and the USA reported encountering video deepfake fraud in 2022, this year’s data – covering the USA, UAE, Mexico, Singapore, and Germany – shows this figure has surged to 49%. Audio deepfakes are also on the rise, with a 12% increase over the 2022 survey data.

This sharp increase in the number of businesses attacked by deepfakes is hardly surprising, given how rapidly AI tools are developing. AI is a double-edged sword, used for both good and ill. It is also becoming more affordable and, in the wrong hands, more dangerous, as fraudsters now actively exploit it. For example, scammers can generate a convincing fake ID using photo or video generators and underground services like OnlyFake – and the cost of such a fake is alarmingly low, around $15.

The price the good guys pay

While fraudsters take advantage of the falling cost of deepfake creation, businesses bear a growing financial burden. Our survey shows that for 92% of organizations, the average loss reached $450,000. Moreover, 10% of surveyed businesses reported losses exceeding $1 million, underscoring the severity of the problem.

What is even more alarming: two years ago, the average amount businesses lost to deepfake fraud was around $230,000. It has nearly doubled since then.

Financial sector: A prime target

Naturally, fraudsters follow the money, so it’s no surprise that the financial industry faces the gravest consequences of deepfake attacks. First and foremost, financial services firms lose more than organizations in other sectors: in 2024, such businesses lost over $603,000 on average per deepfake attack.

At the same time, if we compare traditional banking and FinTech, the former’s losses are slightly below the industry average, at $570,000, while FinTech bears a much greater financial burden, exceeding $637,000. This discrepancy may be explained by FinTech’s faster-paced adoption of digital transactions and the sector’s evolving nature, which could expose it to more sophisticated types of fraud.

As if that were not enough, 23% of surveyed organizations in the financial sector reported losing more than $1,000,000 to AI-generated fraud – compared to a global average of just 10%.

Interestingly, traditional banking appears more susceptible to audio deepfakes: 50% of such organizations encountered audio deepfakes, versus 41% for video. In FinTech, by contrast, video deepfakes prevail: 57% of surveyed companies reported being attacked with AI-generated video, and 53% with audio.



Stand your ground?

The sharp increase in deepfake attacks and the related losses in just two years highlights the urgency for organizations to strengthen their defenses. While the threat may seem intimidating, there are methods that can protect organizations against it fairly reliably. But before we get to them, I would like to share one more finding from our research.

56% of surveyed businesses claim they are very confident in their ability to detect deepfakes, and another 42% report being somewhat confident. Yet only 6% of organizations participating in the survey avoided financial losses from deepfake attacks.

Such a prominent gap between confidence in detecting deepfakes and the reality of financial losses, particularly in the financial sector, shows that many organizations are underprepared for the sophistication of these attacks.

Finally, the tips and tricks

In the era of AI everything, it is tempting to employ AI to better detect AI-generated threats. To a certain extent, this is a wise move, since well-trained neural networks are far more capable of distinguishing a deepfake from a real person. However, to stay ahead, you need more robust AI tools than those used by fraudsters – and the AI race is fast and endless, so your tools may become outdated even before you finish implementing them.
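To make this concrete, here is a minimal sketch of frame-level deepfake screening in Python. It assumes you have already trained a binary real-vs-fake classifier and exported it as deepfake_classifier.pt (a hypothetical file name); the frame sampling and review threshold are illustrative, not a production-grade detector.

```python
# A minimal sketch of frame-level deepfake screening. Assumes a trained
# binary classifier (real vs. fake, one logit) exported via TorchScript
# to "deepfake_classifier.pt" -- a hypothetical file name.
import cv2
import torch
import torchvision.transforms as T

transform = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("deepfake_classifier.pt")  # hypothetical weights
model.eval()

def score_video(path: str, every_nth: int = 10) -> float:
    """Return the average 'fake' probability over sampled frames."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = transform(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

# Usage: flag a session for manual review above a tuned threshold.
# if score_video("onboarding_video.mp4") > 0.7: escalate_to_review()
```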

I advise switching to a new approach, which we at Regula call ‘liveness-centric verification’. It focuses on checking physical objects and their dynamic parameters. For documents, these parameters include optically variable features such as holograms. For faces, the slightest nuances and movements make the difference between a real person and an AI-generated fake. Against highly sophisticated identity fraud, it is no longer safe to rely on mere selfies and document scans.
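To illustrate the active side of such a check (this is my own sketch, not Regula’s implementation), the flow below issues randomly chosen challenges that a pre-recorded or pre-generated video cannot anticipate. The detect_action helper is a hypothetical placeholder for a real face-analysis model or liveness SDK.

```python
# A sketch of a challenge-response liveness session. The point: challenges
# are chosen at session time, so a pre-recorded deepfake cannot respond
# to them. detect_action() is a hypothetical placeholder.
import random

CHALLENGES = ["turn_left", "turn_right", "blink", "smile"]

def prompt_user(challenge: str) -> None:
    # Hypothetical UI hook; in production this would drive the capture app.
    print(f"Please {challenge.replace('_', ' ')} now")

def detect_action(frames: list, expected: str) -> bool:
    """Hypothetical helper: returns True if the expected facial action
    (head turn, blink, smile) is observed in the captured frames."""
    raise NotImplementedError("plug in a face-analysis model here")

def run_liveness_session(capture_frames, rounds: int = 3) -> bool:
    """Pass only if the user performs every randomly chosen challenge."""
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        prompt_user(challenge)
        frames = capture_frames()  # grab a short burst from the camera
        if not detect_action(frames, challenge):
            return False
    return True
```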

Deepfakes are now often so polished that humans, and even technology, may fail to detect them before they cause harm. But there is also good news: the majority of AI-generated deepfakes still lack naturalness. They don’t cast shadows correctly, and their backgrounds may look odd – flaws that show up clearly in liveness sessions. So if you enable a liveness check and ask to see a physical object, be it a person’s face or their ID, you get the chance to examine it far more carefully and comprehensively.

The crucial thing in this approach is to ensure that you’re really dealing with a physical object, not a substituted screen.
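One simplified heuristic for that screen check, offered as my own illustration rather than any particular product’s method: photographing a display often introduces periodic moiré patterns, which stand out as sharp, isolated peaks in the image’s frequency spectrum.

```python
# A simplified screen-replay heuristic: moiré patterns from photographing
# a display create strong isolated peaks in the frequency domain.
# Illustrative only; real products combine many such signals.
import cv2
import numpy as np

def screen_replay_score(image_path: str) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    log_spec = np.log1p(spectrum)

    # Mask out the low-frequency center, where natural images concentrate
    # most of their energy, and examine the remaining high frequencies.
    h, w = log_spec.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    mask = (y - cy) ** 2 + (x - cx) ** 2 > (min(h, w) // 8) ** 2
    outer = log_spec[mask]

    # Periodic moiré creates outliers far above the typical high-frequency
    # energy; this ratio tends to rise on screen replays.
    return float(outer.max() / (outer.mean() + 1e-6))
```

Scores well above those of genuine camera captures would suggest the “face” or “document” is actually being shown on a screen; in practice, the threshold has to be calibrated on your own capture pipeline.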

For all its potential, a liveness check alone will not be enough to fight deepfakes successfully. You need multiple layers of protection: several technologies, methods, and approaches combined. With lifelike deepfakes, you have to dig deeper and analyze user behavior to spot abnormalities. It’s worth paying attention to the device used to access a service, as well as its location, interaction history, and many other factors that help verify the authenticity of the user.
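A toy example of what such layering might look like in code, with entirely illustrative signal names, weights, and threshold:

```python
# A toy risk-scoring layer combining liveness with device and behavioral
# signals. All weights and the threshold are illustrative, not tuned values.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    liveness_score: float      # 0.0 (fake) .. 1.0 (live)
    device_known: bool         # has this device been seen on the account?
    geo_matches_history: bool  # does the location fit past behavior?
    behavior_anomaly: float    # 0.0 (typical) .. 1.0 (highly unusual)

def risk_score(s: SessionSignals) -> float:
    score = (1.0 - s.liveness_score) * 0.5
    score += 0.0 if s.device_known else 0.2
    score += 0.0 if s.geo_matches_history else 0.15
    score += s.behavior_anomaly * 0.15
    return score  # 0.0 = low risk, 1.0 = high risk

def decide(s: SessionSignals, threshold: float = 0.4) -> str:
    # Step-up verification (e.g., another liveness round) on high risk.
    return "step_up_verification" if risk_score(s) >= threshold else "allow"
```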

What is also very important: be ready to adapt. AI develops quickly, and fraudsters try to get the most out of it. On the other hand, researchers and identity verification solution developers also do their best to fight identity fraud. This race has no predictable outcome, so you’ll have to change approaches and tactics as the situation evolves.

