TikTok is Already Terrified of AI Fraud, So Why Aren’t Businesses?
From alcohol-infused pasta recipes to overly complicated dance routines, TikTok has quickly become the place to share new trends with one another. While the platform is mostly light-hearted and comedic, its users have recently become fixated on a much more serious subject: the rise of new fraud methods facilitated by nascent AI technologies. Sadly, this creepy phenomenon is real and poses a significant challenge to us all.
Whether it's using AI voice generation platforms to spoof realistic-sounding voicemail messages or leveraging deepfake video technologies to fabricate fraudulent FaceTime conversations, the applications of emerging AI technologies across the fraud ecosystem are boundless. Now, for the first time, we're beginning to see how effective these tools can be at overcoming the fraud detection systems of businesses.
GROWING WORRIES
Since the start of 2023, the growth of AI audio and video deepfake scams has been plain for all to see. One notable case was that of Jennifer DeStefano, an Arizona mother who was tricked by AI voice-cloning tools into believing her 15-year-old daughter had been kidnapped, with the scammers demanding a $1 million ransom for her safe return [1]. Similar schemes have been reported in other parts of the United States and now seem to be making their way to Europe.
A recent report by Vice journalist Joseph Cox was similarly alarming. Using a relatively simple AI voice replica, the journalist was able to bypass his bank's voice biometric system, gaining full access to his balances and recent transactions [2]. Clearly, fraudsters have a tool at their disposal that is effective across the full spectrum of fraud attacks, from the highly sophisticated to the relatively rudimentary.
THREAT TO BUSINESS
Businesses are also beginning to feel the brunt of this issue. According to Regula, 37% of organizations have already experienced deepfake voice fraud, and 29% have fallen victim to deepfake videos. The same survey found that fake biometric artifacts such as deepfake voice or video are perceived as real threats by 80% of companies worldwide, and by 91% of businesses in the US [3].
However, recognizing a threat is one thing; implementing a strategy to mitigate it is quite another. So while it is reassuring that businesses are beginning to catch up with TikTok's teens in how they perceive the growing challenge of AI-enabled fraud, there is little evidence of them devising solutions that actually tackle it.
UNDERSTANDING THE CHALLENGE
Getting a grip on this issue won't be easy, but there are practical steps companies can take to ensure they're responding to the challenge. As always, any effective prevention strategy begins with raising awareness and building education around the topic. This new wave of fraud can affect virtually every touchpoint of a business, so there's little excuse not to ensure staff at all levels have a working understanding of it.
While anyone can be targeted by these methods, in a business context it is company decision-makers who are most likely to be 'spoofed' by fraudsters with the help of AI. Being able to replicate the voice of a company CEO has obvious advantages when trying to defraud unsuspecting employees. This advance lets fraudsters go beyond earlier techniques, such as CEO email spoofing, that operated on similar principles.
What's more, given that many CEOs are highly public-facing and often give recorded interviews that inevitably end up online, fraudsters have ample material from which to build highly accurate voice models. Ultimately, the more video and audio recordings fraudsters have to work with, the more convincing their clones become and the harder their scams are to detect. That's why companies need to ensure they have fallback measures in place.
BUILDING DEFENSES
Thankfully, by implementing mitigation protocols and taking proactive action, companies of all sizes can safeguard themselves against the damaging consequences of these modern fraud methods. As mentioned, raising awareness and education around the subject is an important first step, but the scale of the challenge demands more. This is where businesses may need to get a little creative.
As strange as it sounds, we may be about to enter an era in which code words and phrases enjoy a major resurgence. Agreeing on a designated word or phrase to quickly authenticate an individual on any call involving sensitive information adds a simple extra step that can derail fraud attacks, as the sketch below illustrates. More effective tools may come to market in time, but this is something companies can implement right away to deter a growing threat.
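To make the idea concrete, here is a minimal sketch of how a code-phrase check might look inside a hypothetical internal verification tool. The phrase, salt handling, and function names are illustrative assumptions rather than any specific product's API; in many companies the check will simply be verbal, but if the phrase is stored digitally, it should be hashed and compared in constant time:

```python
import hashlib
import hmac
import os

# Hypothetical helper for an internal verification tool. The agreed code
# phrase is never stored in plaintext, only as a salted PBKDF2 hash, and
# comparison is constant-time to avoid leaking information via timing.

def hash_phrase(phrase: str, salt: bytes) -> bytes:
    """Derive a salted hash of the agreed code phrase (case-insensitive)."""
    normalized = phrase.strip().lower().encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 100_000)

def verify_caller(spoken_phrase: str, stored_hash: bytes, salt: bytes) -> bool:
    """Return True only if the spoken phrase matches the enrolled one."""
    candidate = hash_phrase(spoken_phrase, salt)
    return hmac.compare_digest(candidate, stored_hash)

# Enrolment: agree on a phrase out of band and store only its hash.
salt = os.urandom(16)
stored = hash_phrase("blue heron at dawn", salt)

# During a sensitive call: challenge the caller for the phrase.
assert verify_caller("Blue Heron at Dawn", stored, salt)  # genuine caller
assert not verify_caller("wrong phrase", stored, salt)    # likely imposter
```

Even a purely verbal version of this check, with no tooling at all, achieves the same goal: it forces a would-be impersonator to produce a secret that no amount of publicly available audio can supply.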
NEW ERA, OLD PRINCIPLES
The applications of emerging AI technologies within the fraud ecosystem are evolving rapidly. From AI voice generation platforms that can mimic realistic voicemail messages to deepfake video technologies, these innovations have already redefined what fraud looks like in 2023. In a business context, C-suite executives appear to be the most vulnerable targets for AI-driven fraudsters seeking to exploit their trustworthiness and authority.
The short-term vulnerabilities created by these technologies could have severe ramifications, jeopardizing a company's reputation and public image. Thankfully, the simple measures described above can be taken right now, and they should be. As always, it's also essential to protect your company against other forms of fraud attack by investing in a solution like SEON, which turns AI and machine learning back against the fraudsters to highly effective ends. Otherwise, businesses risk being caught out by increasingly costly fraud attacks, which can affect companies of all shapes and sizes.