
Hoxhunt ChatGPT/Cybersecurity Research Reveals: Humans 1, AI 0

Hoxhunt, the leading cybersecurity behavior change software company, released a research report that analyzes the effectiveness of ChatGPT-generated phishing attacks. The study, which analyzed more than 53,000 email users in over 100 countries, compares the win rates of simulated phishing attacks created by human social engineers with those created by AI large language models. While the potential for ChatGPT to be used for malicious phishing activity continues to capture everyone’s imagination, Hoxhunt’s research shows that human social engineers still outperform AI at inducing clicks on malicious links.

The study revealed that professional red teamers induced a 4.2% click rate, versus a 2.9% click rate for ChatGPT, in the population sample of email users. Humans remained clearly better at hoodwinking other humans, outperforming AI by 69%. The study also found that users with more experience in a security awareness and behavior change program showed significant protection against phishing attacks from both human- and AI-generated emails, with failure rates dropping from over 14% among less-trained users to between 2% and 4% among experienced users.
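As a rough illustration of how metrics like these are derived from a phishing simulation, the sketch below computes per-cohort click/failure rates from raw counts. The counts, cohort labels, and the `rate` helper are hypothetical and are not taken from the Hoxhunt report.

```python
# Minimal illustrative sketch: the counts and cohort labels below are
# hypothetical, NOT data from the Hoxhunt report. It only shows how
# click/failure rates like those quoted above could be computed from
# raw phishing-simulation results.

def rate(failures: int, recipients: int) -> float:
    """Share of recipients who failed the simulation (e.g. clicked the link)."""
    return failures / recipients

# Hypothetical per-cohort simulation counts: (failures, recipients).
cohorts = {
    "human-crafted phish": (420, 10_000),       # would yield a ~4.2% click rate
    "ChatGPT-generated phish": (290, 10_000),   # would yield a ~2.9% click rate
    "less-trained users": (1_450, 10_000),      # would yield a ~14.5% failure rate
    "experienced users": (300, 10_000),         # would yield a ~3.0% failure rate
}

for name, (failures, recipients) in cohorts.items():
    print(f"{name}: {rate(failures, recipients):.1%}")
```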



“Good security awareness, phishing, and behavior change training works,” said Pyry Åvist, co-founder and CTO of Hoxhunt. “Having training in place that is dynamic enough to keep pace with the constantly-changing attack landscape will continue to protect against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish regardless of its human or robotic origins.”

The research ultimately shows that AI can be used for good or evil, both to educate humans and to attack them. It will therefore create more opportunities for attackers and defenders alike. The human layer is by far the largest attack surface and the greatest source of data breaches, with at least 82% of breaches involving the human element. While large language model-augmented phishing attacks do not yet perform as well as human social engineering, that gap will likely close, and AI is already being used by attackers. It is imperative that security awareness and behavior change training stay dynamic with the evolving threat landscape in order to keep people and organizations safe from attacks.


