
How To Navigate a Growing World of Deepfakes

By: Phil Tomlinson, SVP, Global Offerings at TaskUs

The greatest threats are often those you ignore or downplay. Deepfakes may be just that kind of threat. 31% of business leaders believe deepfakes have not increased their fraud risk. One in four aren’t familiar with them at all. 

Yet evidence abounds the threat is real. 

One security report found that deepfake fraud increased by 1,740% in North America and by 1,530% in the Asia-Pacific region in 2022. Another found that it increased 3,000% from 2022 to 2023. Voice-based fraud accounts for an estimated $25 billion in annual losses, and almost 37% of US businesses say they’ve been targeted by audio deepfakes alone.

Fraud isn’t the only danger. Deepfake content spreads misinformation that can easily damage a company’s reputation and alienate its customers. On social platforms, 39% of users report altering their behavior after encountering deepfake content.

And sadly, it seems that by and large businesses aren’t prepared for this onslaught — 80% of companies have not even established the necessary protocols to deal with deepfake attacks. So what’s a company to do? First, know your enemy, then fight fire with fire.

Understanding Deepfakes

In 2021, a TikTok account posted deepfake videos of Tom Cruise performing magic tricks and other benign activities. A special effects artist had used AI to create the videos, demonstrating AI’s entertainment potential. But since then, the technology has advanced rapidly in sophistication and impact.

In September 2024, a U.S. senator took a Zoom call from what turned out to be a deepfake impersonator of a Ukrainian foreign minister. And elections in India, the U.S., and Taiwan were peppered with deepfake videos, robocalls, and impersonations of major political figures.

Deepfakes follow an unfortunate human impulse: take a promising new technology, and figure out how to use it for ill-gotten gain. In the case of deepfakes, tap AI to generate audio and video that impersonates real people, then use it to defraud businesses or misrepresent them. 

The sophisticated results of such AI-driven fakes can be compelling. In February 2024, a finance employee at a multinational firm in Hong Kong transferred $25 million after deepfake fraudsters impersonated the company’s chief financial officer during a video call.

While such advanced fakes can easily dupe most human viewers or listeners, AI itself turns out to be a formidable ally in combating deepfakes. But a hybrid approach that pairs AI detection with human judgment holds the most promise.


How to thwart them — deepfake detection

Properly trained AI models can detect slight anomalies — many too slight for the human eye or ear to pick up — that can flag so-called “synthetic” (AI-generated) content. Perhaps small blurry spots in an image, mismatched skin tones from one frame to another, or odd background noise in a tiny part of an audio track.


But the most powerful and accurate results come from a multi-modal approach: analyzing images, voice and sound, and text together. This approach can pick out fakes without generating a flood of false positives, and it performs with a high degree of accuracy even in real-world settings where ambient noise and degraded content quality make detection harder.
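A minimal sketch of what such late fusion might look like, assuming hypothetical single-modality detectors that each emit a “likely synthetic” score between 0 and 1. The weights and threshold below are illustrative assumptions, not values from any production system:

```python
from dataclasses import dataclass


@dataclass
class ModalityScores:
    """Per-modality 'likely synthetic' scores in [0, 1], as produced by
    hypothetical visual, audio, and text detectors (names are illustrative)."""
    visual: float
    audio: float
    text: float


def fuse_scores(scores: ModalityScores,
                weights: tuple = (0.5, 0.3, 0.2),
                threshold: float = 0.6) -> tuple:
    """Late fusion: a weighted average of the per-modality scores.
    Returns (fused_score, flag_as_deepfake)."""
    w_v, w_a, w_t = weights
    fused = w_v * scores.visual + w_a * scores.audio + w_t * scores.text
    return fused, fused >= threshold


# Example: strong visual anomalies, moderate audio anomalies, clean transcript.
score, flagged = fuse_scores(ModalityScores(visual=0.9, audio=0.7, text=0.1))
```

The advantage of fusing modalities is exactly what the single-modality case lacks: a fake that is visually convincing may still betray itself in the audio track, so no one weak signal has to carry the whole decision.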

Of course, deepfake technology, like the AI it depends on, is advancing rapidly, creating the perennial ‘build a better lock, find a new way to pick it’ cycle. And multi-modal detection systems demand significant computational power, especially when they work with real-time data, as in video call analysis.

But these systems offer companies a way to detect and avoid fraudulent attacks in critical environments like call centers. They can also ensure a company’s content moderation effectively screens malevolent fakes that could kill its reputation.

Deepfake detection isn’t simple or trivial, but if you keep a few things in mind, you can successfully put it to work for your company.

Steps for success

Keep it always on and current — Deepfake attacks can happen at any time. So however you deploy your detection, it has to have the capacity to remain vigilant at all times. It can never be off.


As deepfake tech gets better, it becomes harder to detect — unless your detection platform and content moderation stay one step ahead.

It’s also crucial that your detection can expand to cover languages other than English. While LLMs (and the GenAI built on them) have made amazing progress seemingly overnight, most still focus on English. That won’t cut it in today’s content world.

Integrate it with your workflow — Deepfake detection isn’t a standalone filter. It has to work in concert with all your content and all the ways your people — who are at risk of falling prey to fraudulent attacks — work. That could be customer service agents at a call center, sales and marketing teams, or finance professionals going about their business.

To be most effective, you have to integrate the detection and moderation with your workflows. This ensures that the protection follows the flow and is there no matter who needs it or when they need it.
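As a sketch of what “protection follows the flow” could mean in practice, here is a hypothetical call-center hook that scores each incoming audio chunk before it reaches the agent. The function names, the stand-in detector, and the 0.7 threshold are all assumptions for illustration:

```python
from typing import Callable


def process_incoming_call(audio_chunk: bytes,
                          detect: Callable[[bytes], float],
                          alert: Callable[[float], None],
                          threshold: float = 0.7) -> bool:
    """Run a (hypothetical) deepfake detector inside the normal call
    workflow: every audio chunk is scored before it reaches the agent.
    Returns True if the chunk passes, False if it is held and flagged."""
    score = detect(audio_chunk)
    if score >= threshold:
        alert(score)  # notify the agent / security team in the same flow
        return False
    return True


# Usage with stand-in detector and alert sink:
alerts = []
ok = process_incoming_call(b"\x00" * 16,
                           detect=lambda chunk: 0.85,  # stand-in score
                           alert=alerts.append)
```

The point of the sketch is structural: the detector is a parameter of the existing workflow, not a separate tool someone has to remember to run.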

Make it a human-AI collaboration — AI arms a company with a powerful weapon against deepfakes, significantly increasing the speed at which content gets monitored. It makes real-time protection of voice and video calls possible. But it is not a 100% automated answer.

We still need human judgment. Complex scenarios, ethical concerns, and discerning cultural context require human expertise and insight. You need a collaboration between AI and human judgment, and insight into how to design workflows that foster that collaboration.
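One common way to design that collaboration is confidence-based triage: automate the clear-cut cases and route the ambiguous middle band to a human reviewer. A minimal sketch, with thresholds that are purely illustrative assumptions:

```python
def route_decision(score: float,
                   auto_block: float = 0.9,
                   auto_pass: float = 0.2) -> str:
    """Triage by model confidence: act automatically on near-certain cases,
    and send the ambiguous middle band to a human reviewer.
    Thresholds are illustrative, not from any specific system."""
    if score >= auto_block:
        return "block"        # near-certain fake: act immediately
    if score <= auto_pass:
        return "pass"         # near-certain genuine: add no friction
    return "human_review"     # ambiguous: needs human judgment


# Usage: three scores, three different routes.
decisions = [route_decision(s) for s in (0.95, 0.10, 0.50)]
```

Tightening or widening the middle band is the design lever: a wider band means more human review and fewer automated mistakes, a narrower band means the reverse.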

Ignoring or downplaying deepfakes invites disaster. But tapping the right AI-driven technology and expertise can protect your company and its reputation, and keep your customers and employees shielded from harmful deepfake content.

