
McAfee Debuts Deepfake Audio Detection Tech at CES 2024 to Counter AI-Generated Scams and Disinformation

Known as ‘Project Mockingbird,’ McAfee reveals advanced AI-detection capabilities that can empower consumers to discern what’s real and what’s fake in a time of malicious and misleading AI-generated content

  • New AI-powered innovation from McAfee is over 90% accurate at detecting and exposing maliciously altered audio in videos.

  • With more than two-thirds (70%) of Americans concerned about deepfakes making it hard to trust what they see and hear online, McAfee’s Project Mockingbird will give people a powerful tool to navigate their ever-evolving digital world.

  • AI-powered combination of contextual, behavioral, and categorical detection models lays the foundation for safeguarding online integrity as deepfake-driven cyberbullying, reputation manipulation, and investment scams gain prominence.

McAfee Corp., a global leader in online protection, announced its AI-powered Deepfake Audio Detection technology, known as Project Mockingbird, at the Consumer Electronics Show. This new, proprietary technology was developed to help defend consumers against the surging threat of cybercriminals utilizing fabricated, AI-generated audio to carry out scams that rob people of money and personal information, enable cyberbullying, and manipulate the public image of prominent figures.




Increasingly sophisticated and accessible Generative AI tools have made it easier for cybercriminals to create highly convincing scams, such as using voice cloning to impersonate a family member in distress, asking for money. Others, often called “cheapfakes,” may involve manipulating authentic videos, like newscasts or celebrity interviews, by splicing in fake audio to change the words coming out of someone’s mouth; this makes it appear that a trusted or known figure has said something different than what was originally said.

Anticipating the ever-growing challenge consumers face in distinguishing real from digitally manipulated content, McAfee Labs, the innovation and threat intelligence arm at McAfee, has developed an industry-leading advanced AI model trained to detect AI-generated audio. McAfee’s Project Mockingbird technology uses a combination of AI-powered contextual, behavioral, and categorical detection models to identify whether the audio in a video is likely AI-generated. With a 90% accuracy rate currently, McAfee can detect and protect against AI content that has been created for malicious “cheapfakes” or deepfakes, providing unmatched protection capabilities to consumers.
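To illustrate how several specialized detectors might be combined into a single likelihood that a clip's audio is synthetic, here is a minimal Python sketch. The detector names, weights, and `score()` interface are illustrative assumptions for this article, not McAfee's proprietary models or API.

```python
# Hypothetical sketch: blending scores from contextual, behavioral, and
# categorical detectors into one likelihood that audio is AI-generated.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detector:
    name: str                         # e.g. "contextual", "behavioral", "categorical"
    weight: float                     # relative trust placed in this model's output
    score: Callable[[bytes], float]   # returns a probability in [0.0, 1.0]


def deepfake_likelihood(audio: bytes, detectors: List[Detector]) -> float:
    """Weighted average of per-model probabilities that the audio is synthetic."""
    total_weight = sum(d.weight for d in detectors)
    weighted = sum(d.weight * d.score(audio) for d in detectors)
    return weighted / total_weight


# Example usage with stub scoring functions standing in for real classifiers.
detectors = [
    Detector("contextual", 1.0, lambda a: 0.92),
    Detector("behavioral", 1.0, lambda a: 0.88),
    Detector("categorical", 0.5, lambda a: 0.75),
]
likelihood = deepfake_likelihood(b"...raw audio bytes...", detectors)
print(f"Likelihood audio is AI-generated: {likelihood:.0%}")  # 87%
```

The output is a probability rather than a yes/no verdict, which matches the "weather forecast" framing in the quote that follows: the user is given a likelihood and decides how to act on it.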

“With McAfee’s latest AI detection capabilities, we will provide customers a tool that operates at more than 90% accuracy to help people understand their digital world and assess the likelihood of content being different than it seems,” said Steve Grobman, Chief Technology Officer, McAfee. “So, much like a weather forecast indicating a 70% chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”

“The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. With McAfee’s deepfake audio detection capabilities, we’ll be putting the power of knowing what is real or fake directly into the hands of consumers. We’ll help consumers avoid ‘cheapfake’ scams where a cloned celebrity is claiming a new limited-time giveaway, and also make sure consumers know instantaneously, when watching a video about a presidential candidate, whether it’s real or AI-generated for malicious purposes. This takes protection in the age of AI to a whole new level. We aim to give users the clarity and confidence to navigate the nuances in our new AI-driven world, and to protect their online privacy, identity, and well-being,” continued Grobman.

Building on McAfee’s rich history of AI innovation, the first public demos of Project Mockingbird, McAfee’s Deepfake Audio Detection technology, will be available onsite at the Consumer Electronics Show 2024. The unveiling of this new AI technology is further evidence of McAfee’s focus on developing a comprehensive portfolio of AI models that are cross-platform and serve multiple use cases to safeguard consumers’ digital lives.


Why Project Mockingbird


Mockingbirds are a group of birds primarily known for mimicking, or “mocking,” the songs of other birds. While there’s no proven reason why mockingbirds mock, one theory is that female birds may prefer males who sing more songs, so the males mock to trick them. Similarly, cybercriminals leverage Generative AI to “mock” or clone the voices of celebrities, influencers, and even loved ones in order to defraud consumers.

Deep Concerns about Deepfake Technology

Consumers are increasingly concerned about the sophisticated nature of these scams, as they no longer trust that their senses and experiences are enough to determine whether what they’re seeing or hearing is real or fake. Results from McAfee’s December 2023 Deepfakes Survey revealed the following:

Deepfake experiences and perspectives

  • The vast majority (84%) of Americans are concerned about how deepfakes will be used in 2024.
  • 68% of Americans are more concerned about deepfakes now than they were one year ago.
  • Over a third (33%) of Americans said they (16%) or someone they know (17%) have seen or experienced a deepfake scam; this is most prominent among 18–34 year olds, at 40%.

Top concerns for deepfake usage in 2024:

  • More than half (52%) of Americans are concerned that the rise in deepfakes will influence elections, while 49% worry deepfakes will be used to impersonate public figures and 48% fear they will undermine public trust in the media.
  • Concern about the proliferation of scams enabled by AI and deepfakes is also considerable, at 57%.
  • The use of deepfakes for cyberbullying is concerning for 44% of Americans, with more than a third (37%) of people also concerned about deepfakes being used to create sexually explicit content.

For over a decade, McAfee has used AI to safeguard millions of global customers from online privacy and identity threats. By running multiple models in parallel, McAfee can perform a comprehensive analysis of problems from multiple angles. For example, structural models are used to understand the threat types, behavior models to understand what that threat does, and contextual models to trace the origin of the data underpinning a particular threat. Utilizing multiple models concurrently allows McAfee to provide customers with the most effective information and recommendations and reinforces the company’s commitment to protecting people’s privacy, identity, and personal information.
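The parallel-analysis idea described above can be sketched as follows. This is a hypothetical Python illustration under the assumption of three stand-in models, structural, behavior, and contextual, each returning its own findings; the function names and outputs are stubs, not McAfee's actual implementation.

```python
# Hypothetical sketch: analyzing one sample with several models concurrently
# and merging their findings into a single report.
from concurrent.futures import ThreadPoolExecutor


def structural_model(sample: bytes) -> dict:
    return {"threat_type": "audio_deepfake"}       # what kind of threat it is


def behavior_model(sample: bytes) -> dict:
    return {"behavior": "voice_cloning_scam"}      # what the threat does


def contextual_model(sample: bytes) -> dict:
    return {"origin": "spliced_into_real_video"}   # where the underlying data came from


def analyze(sample: bytes) -> dict:
    """Run all models in parallel and combine their results."""
    models = (structural_model, behavior_model, contextual_model)
    findings: dict = {}
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        for result in pool.map(lambda m: m(sample), models):
            findings.update(result)
    return findings


print(analyze(b"...suspect media sample..."))
```

Running the models side by side, rather than one after another, is what allows a single piece of content to be examined from several angles without multiplying the time the analysis takes.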


