
New NINJIO Report Provides Insights on Rapidly Growing Threat of AI-powered Social Engineering

As cybercriminals increasingly use AI to launch more sophisticated and effective attacks, security leaders must adapt with new forms of training


NINJIO, an industry-leading cybersecurity awareness training company, has released its latest report: “The CISO’s Guide to AI-powered Social Engineering.” With the rapid proliferation of AI applications such as large language models and deepfakes, cybercriminals have never had more tools to deceive and manipulate employees. The report covers how CISOs and other cybersecurity leaders can prepare the workforce for AI-powered phishing attacks, deepfakes, and other new cybercriminal tactics.


AI has made it easier for cybercriminals to launch advanced social engineering attacks because they don’t need advanced language skills or technical ability. Cybercriminals can produce convincing spear phishing messages at scale, carry out multi-level social engineering attacks with deepfakes, and use AI to conduct surveillance on potential victims. It’s the CISO’s responsibility to ensure that employees are aware of these tactics.

“CISOs can’t afford to be reactive when it comes to AI-powered social engineering,” said Zack Schuler, Founder and Executive Chairman of NINJIO. “The threat is already here, and security leaders must remain one step ahead of ever-shifting cybercriminal tactics. The latest NINJIO report demonstrates how cybersecurity awareness training can adapt to the evolving cyberthreat landscape with real-world examples of AI-powered cyberattacks and individual behavioral interventions that will help employees address psychological vulnerabilities.”

There are three main takeaways from the report:

1.  AI has permanently changed the cyberthreats companies face.


AI has reduced or eliminated the barriers to entry for personalized social engineering attacks. For example, phishing was already among the most common and financially destructive cyberattacks, and AI-enabled tools like LLMs and deepfakes will make these attacks even more effective. By enabling cybercriminals to create polished and personalized phishing content — and even follow up on this content with deepfaked “confirmation” communications — AI gives more threat actors the ability to launch sophisticated cyberattacks that have a much greater chance of success.


2.  Cybersecurity awareness training must adapt to the AI era.

Thanks to AI, it has never been more difficult for employees to distinguish between real and malicious content. Over two-thirds of successful breaches already involve human beings, and AI makes social engineering attacks even harder to detect. CISOs and other security leaders must help employees adapt to these changes by explaining real-world cyberattacks such as deepfaked robocalls and LLM-generated phishing messages. Employees can no longer rely on red flags like misspellings and other errors. They must be capable of identifying coercive language, a sense of urgency, and other signs of psychological manipulation.
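To make the shift concrete: where training once pointed at misspellings, it now has to point at the language of manipulation itself. The following is a minimal, hypothetical sketch of that idea — a keyword check for urgency and coercion cues of the kind the report says employees should learn to recognize. The cue lists and function name are illustrative assumptions, not part of NINJIO's product or methodology, and real detection or training content would be far more nuanced than substring matching.

```python
# Hypothetical cue lists for illustration only.
URGENCY_CUES = ["immediately", "urgent", "within 24 hours", "right away", "asap"]
COERCION_CUES = [
    "or your account will be suspended",
    "do not tell anyone",
    "failure to comply",
    "final warning",
]

def manipulation_signals(message: str) -> list:
    """Return the psychological-manipulation cues found in a message."""
    text = message.lower()
    return [cue for cue in URGENCY_CUES + COERCION_CUES if cue in text]

email = (
    "This is your final warning: verify your payroll details "
    "immediately or your account will be suspended."
)
print(manipulation_signals(email))
```

Even a toy check like this flags a well-spelled, grammatically clean message — the point being that the signal has moved from surface errors to pressure tactics.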

3.  CISOs must maximize the impact of cybersecurity awareness training.

While the threat of AI-powered social engineering is intimidating for employees, the right cybersecurity awareness training can empower them to keep their organizations safe. Beyond concrete examples that demonstrate how much damage AI social engineering can cause and how these attacks can be resisted, security leaders must ensure that training is personalized and accountable. By developing unique behavioral profiles for each employee, security leaders can address psychological vulnerabilities and track performance across the organization.

At a time when AI-powered social engineering attacks are surging, an organization-wide focus on preventing these attacks has never been more vital.


