
AI Agents: Transformative or Turbulent?

By Javvad Malik, Lead Security Awareness Advocate at KnowBe4

Described as revolutionary and disruptive, AI agents are the new cornerstone of innovation in 2025. But as with any technology on the cutting edge, this evolution isn’t without its trade-offs. Will this new blend of intelligence and autonomy really usher in a new era of efficiency? Or does the ability of AI agents to act independently widen the attack surface for cyber threats, making them a potential liability?


Unlike the generative AI tools that we’ve grown so familiar with in the U.S., AI agents represent the next frontier of artificial intelligence. While widely known generative AI tools like ChatGPT, Gemini and Grok process user input and generate text based on learned data patterns, agentic AI goes a step further—autonomously making decisions and taking actions to achieve specific outcomes. Think RoboCop or I, Robot and you’re not a million miles away. Sounds a bit unnerving, doesn’t it?

Yet, in the hands of organizations, these agents could revolutionize industries, from automating customer interactions to managing vast logistics operations seamlessly. More realistic applications for AI agents include customer service bots, personal assistants, financial advisors, and even self-driving vehicles.

Take an agentic AI personal assistant as an example: this type of agent can leverage data-driven decision-making, machine learning, and logic-based reasoning to book flights, curate and send emails, and even automate complex workflows without human intervention. In fact, research reveals that 58% of workers are already using AI agents daily, with 41% highlighting the automation of tedious tasks as the primary benefit.
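To make the idea of autonomous workflow execution concrete, here is a minimal sketch of an agentic loop in Python. Everything in it is hypothetical and purely illustrative: the tool names (`book_flight`, `send_email`), their behavior, and the pre-computed plan are invented for this example, and a real agent would use a language model to choose and re-plan its next action after each observation rather than following a fixed list.

```python
# Hypothetical, illustrative sketch of an agentic loop: the agent works
# through a plan of tool calls toward a goal, collecting observations.
# Tool names and logic are invented for illustration only.

def book_flight(destination: str) -> str:
    """Stand-in for a real flight-booking integration."""
    return f"flight booked to {destination}"

def send_email(recipient: str, body: str) -> str:
    """Stand-in for a real email-sending integration."""
    return f"email sent to {recipient}"

# Registry mapping tool names to callables the agent is allowed to use.
TOOLS = {"book_flight": book_flight, "send_email": send_email}

def run_agent(goal: str, plan: list[tuple[str, tuple]]) -> list[str]:
    """Execute each (tool, args) step and record the observation.

    A real agent would feed each observation back into a model and
    decide the next step dynamically; here the plan is fixed.
    """
    observations = []
    for tool_name, args in plan:
        result = TOOLS[tool_name](*args)
        observations.append(result)
    return observations

log = run_agent(
    goal="arrange travel for next week",
    plan=[
        ("book_flight", ("SFO",)),
        ("send_email", ("manager@example.com", "Travel booked.")),
    ],
)
```

The key property the sketch illustrates is exactly what the article flags as both benefit and risk: once the tool registry and goal are set, no human sits between decision and action.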

The uptake of AI agents across the U.S. is gaining momentum, with sectors like finance, healthcare, and retail increasingly adopting autonomous technologies to streamline operations and enhance customer experiences. A recent survey by the U.S. Chamber of Commerce with Teneo revealed that 98% of small businesses in the U.S. already use AI-assisted tools such as search engines or virtual assistants, and 42% use generative AI tools such as chatbots.

Agentic AI in organizations can offer unparalleled efficiency and streamlining of operations. In doing so, these AI agents can reduce human error and increase productivity.

But, like most revolutionary advancements in technology, this stark shift from reactive assistance to proactive automation comes with its own set of risks. Perhaps the most concerning is how this potentially widens the attack surface for cybercriminals.

Agentic AI could escalate the sophistication, personalization, and scale of social engineering and phishing attacks, particularly through email. Generative AI has already elevated phishing capabilities, enabling targeted and convincing attacks at scale. According to Egress threat intelligence, AI is referenced in 74.8% of phishing toolkits examined, with 82% mentioning deepfakes.



Agentic AI takes this threat a step further by introducing an element of automation to these attacks. This could lead to more dynamic, adaptive, and persistent phishing campaigns that can learn from and respond to user behavior in real-time. The autonomous nature of these AI agents could allow attackers to deploy and manage large-scale phishing operations with minimal human intervention, making detection and prevention even more challenging.

Moreover, 63% of cybersecurity leaders express concern about the use of deepfakes in cyberattacks, while 61% worry about cybercriminals leveraging generative AI chatbots to enhance their phishing campaigns. These statistics underscore the gravity of the situation and the need for robust countermeasures.

While these risks should be taken seriously by organizations, it is important to note that they are not entirely new. Rather, in many cases, they are an evolution of existing threats we’ve encountered with previous forms of AI. The key difference lies in the increased autonomy and scalability that agentic AI brings.

Organizations looking to adopt agentic AI should consider the overall risk and undertake a thorough risk assessment before deployment. Mitigation measures can include strong authentication, such as multi-factor authentication, alongside regularly updated and patched software.

From a procedural perspective, clear guidelines and ethical frameworks should be established for AI agent operations. Finally, organizations should invest in continuous employee training on AI security awareness.

It is clear that AI agents bring significant benefits and risks. Therefore, a multi-faceted approach combining human expertise with AI-driven intelligent technology is crucial. Particularly in the realm of email security, where social engineering attacks continue to evolve, a blend of comprehensive training and advanced AI-powered detection systems offers the strongest defense.

By leveraging AI’s ability to identify subtle patterns indicative of AI-generated content, alongside human vigilance and critical thinking, organizations can be better equipped to defend against even the most sophisticated phishing attempts.
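As a toy illustration of this layered idea, the sketch below combines a few simple heuristic signals into a phishing risk score. It is not a real detector and does not reflect any specific vendor's method: the phrase list, weights, and threshold are all invented for illustration, and production systems would use trained models over far richer features.

```python
# Toy, invented illustration of signal-based phishing scoring.
# Phrases, weights, and thresholds are assumptions, not a real product.

SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent action required",
    "click here",
)

def phishing_score(subject: str, body: str,
                   sender_domain: str, known_domains: set[str]) -> float:
    """Return a risk score in [0, 1] from simple heuristic signals."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Each suspicious phrase found adds a fixed weight.
    score += 0.4 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # An unfamiliar sending domain adds further weight.
    if sender_domain not in known_domains:
        score += 0.5
    return min(score, 1.0)

# Example: an obviously suspicious message vs. a routine internal one.
risky = phishing_score(
    "Urgent action required",
    "Click here to verify your account now.",
    "login-alerts.example", {"corp.example"},
)
benign = phishing_score(
    "Lunch", "See you at noon.", "corp.example", {"corp.example"},
)
```

The point is not the specific signals but the architecture: automated scoring flags likely attacks at scale, while trained humans handle the ambiguous middle ground.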

As we navigate this new era of agentic AI, staying informed, adaptable, and proactive in our security measures will be key to harnessing the benefits while mitigating the risks.

