
3 Guiding Principles For Managing Fraud In Agentic AI

It seems like agentic artificial intelligence (AI) is everywhere now, with easily accessible tools making it easier than ever for companies to experiment and roll out customized solutions to support complex business tasks. These tools can drive efficiencies, cost savings, improved customer support, and other benefits—but what happens when fraud inevitably enters the picture?

Read More on AiThority: AI Agents: Transformative or Turbulent?

The reality is that not every company is proactively considering or controlling for fraud risks in the agentic tools they are adopting, despite the growing prevalence of fraud in AI. According to one analysis, AI could potentially enable fraud losses of $40 billion in the US by 2027. That’s a 32% compound annual growth rate from $12.3 billion in 2023.

Agentic tools, as popular as they are, are still relatively new, so it can be difficult to understand their use cases, let alone where fraud can occur, without a theoretical example. Imagine an insurance company introduces an agentic tool to automate the claims process for customers. The tool is composed of a “team” of AI agents that each complete a different task: collecting claim information, processing claim photos, analyzing claim details against the policy, making claims decisions, and issuing disbursements. The tool works well, but over time different types of fraud occur: the occasional policyholder feeds the tool false claim information to secure larger disbursements; a rogue adjuster working on commission alters algorithm rules to falsely inflate disbursements and earn higher commissions; and at least one fraudster cashes in on automated approvals by flooding the system with fake, routine claims. And that’s just the tip of the iceberg.
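To make the example concrete, here is a minimal Python sketch of that hypothetical claims pipeline, with comments marking where each fraud vector could enter. The agent steps, field names, and policy limit are illustrative assumptions, not a description of any real insurer’s system.

```python
# Hypothetical sketch of the claims pipeline described above (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Claim:
    policy_id: str
    description: str
    photos: list = field(default_factory=list)
    amount_requested: float = 0.0
    approved: bool = False
    disbursement: float = 0.0

def collect_claim(raw: dict) -> Claim:
    # Intake agent: a policyholder could submit false claim details here.
    return Claim(policy_id=raw["policy_id"],
                 description=raw["description"],
                 photos=raw.get("photos", []),
                 amount_requested=float(raw["amount"]))

def analyze_against_policy(claim: Claim, policy_limit: float) -> Claim:
    # Decision agent: a rogue adjuster could tamper with these rules
    # (e.g., raising the limit) to inflate payouts.
    claim.approved = claim.amount_requested <= policy_limit
    return claim

def issue_disbursement(claim: Claim) -> Claim:
    # Disbursement agent: automated approvals with no volume or pattern checks
    # can be flooded with fake, routine claims.
    claim.disbursement = claim.amount_requested if claim.approved else 0.0
    return claim

raw_submission = {"policy_id": "P-1001", "description": "water damage", "amount": 4200}
claim = issue_disbursement(
    analyze_against_policy(collect_claim(raw_submission), policy_limit=5000))
print(claim)
```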

Fraud in agentic tools can occur at any stage, with any user, and at any time. Sounds intimidating to control for, but it doesn’t need to be. Here are a few key principles companies can follow to make fraud identification, prevention, and mitigation in agentic tools easier.

“Know Your Agent”

Data is foundational to agentic and other AI-powered tools—so if it gets manipulated, corrupted, or altered in any way, the foundation cracks and fraud can take root. It is therefore critical that organizations adopt a “Know Your Agent” (KYA) mindset, a concept similar to KYC (Know Your Client or Know Your Customer) that focuses on confirming the veracity of agentic models and the data they use.

In practice, KYA means protecting AI tools with robust data governance, including strict data management, access management, and AI model validation policies to ensure the accuracy of AI-generated results. Where is the data coming from? What data is being used to train agentic algorithms and is that data correct? Who has access to the data or model (e.g., is there potential for manipulation)? How well does each agent manage adversarial or bad data? Have alert thresholds been established to help identify suspicious data? Are the model’s results reliable and free from bias, hallucinations, or fraud?
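As one way to picture these checks in practice, the sketch below shows simple KYA-style governance flags applied to incoming or training records. The approved sources, authorized editors, and plausibility range are assumptions invented for illustration, not a standard.

```python
# Illustrative "Know Your Agent" data checks; sources, editors, and the
# plausibility range below are assumptions for this sketch, not a standard.
APPROVED_SOURCES = {"claims_db", "policy_db"}          # data provenance allow-list
AUTHORIZED_EDITORS = {"model_risk_team", "etl_pipeline"}

def governance_flags(record: dict) -> list[str]:
    """Return governance flags raised by a single training or input record."""
    flags = []
    if record.get("source") not in APPROVED_SOURCES:
        flags.append("unknown data source")
    amount = record.get("amount", 0)
    if not (0 <= amount <= 1_000_000):
        flags.append("amount outside plausible range")
    if record.get("last_edited_by") not in AUTHORIZED_EDITORS:
        flags.append("edited by unauthorized user")
    return flags

batch = [
    {"source": "claims_db",    "amount": 4200, "last_edited_by": "etl_pipeline"},
    {"source": "email_scrape", "amount": 4200, "last_edited_by": "etl_pipeline"},
    {"source": "claims_db",    "amount": -50,  "last_edited_by": "adjuster_217"},
]
for record in batch:
    print(governance_flags(record))
```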

Data and model management doesn’t need to be complicated, but it should be comprehensive. For companies looking to bring more rigor to their models and data, financial services playbooks for KYC and bank model risk management can serve as good frameworks to adapt and reimagine with agentic AI in mind.

Assume that fraud will happen—and monitor for it

The highly autonomous nature of agentic tools, paired with a light human touch, makes them a potential minefield for fraudulent activity. The reality is that you can build an agentic tool with fraud prevention in mind, but it is impossible to make it completely fraud-proof, because fraud always finds a way in.


Fraud risk management is about keeping fraud levels as low as possible, and to do that, organizations should routinely and frequently monitor and audit agentic tools. Anomaly detection systems that leverage behavioral analytics can detect when agentic systems or their human operators are deviating from expected norms. Risk and control teams can regularly review rules-based code, such as algorithm decision trees or alert thresholds, to identify potential nefarious manipulation. Regular, independent audits can help surface new or troubling patterns or inconsistencies that could point to possible deception.
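For instance, one very simple form of anomaly detection is to flag values that deviate sharply from a historical baseline. The sketch below assumes a z-score approach with a 3-sigma threshold on disbursement amounts; real behavioral-analytics systems would use far richer signals.

```python
# Minimal monitoring sketch: flag disbursements that deviate sharply from the
# historical baseline. The 3-sigma threshold and sample data are assumptions.
import statistics

def flag_anomalies(history: list[float], recent: list[float],
                   z_threshold: float = 3.0) -> list[float]:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
    return [x for x in recent if abs(x - mean) / stdev > z_threshold]

baseline = [1200, 950, 1100, 1300, 1050, 990, 1150]   # past disbursements (USD)
new_payouts = [1080, 9800, 1210]                       # 9,800 should be flagged
print(flag_anomalies(baseline, new_payouts))
```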

Organizations that stay alert and proactive on fraud will likely be better positioned to spot, resolve, and prevent similar future fraud cases than peers who wait until an incident happens to get serious about fraud.

Responsible AI

A simple rule of thumb in AI is this: if it can think or mimic thinking, it should be governed by a responsible AI framework. As much as agentic AI is engineered to run nearly autonomously, it is just as important to put safeguards around the people who interact with these systems as it is to safeguard the systems themselves. This can be done by establishing a responsible AI framework, code of conduct, or other guidelines to govern the safe operation and use of AI.

Approaches like multi-factor access management, single-agent access, training (including training on fraud prevention), industry-wide information sharing, and more can all help minimize fraud, mitigate biases, and promote responsible AI deployments.
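As a rough illustration of single-agent access, the sketch below scopes each hypothetical agent to only the actions its task requires and denies everything else by default. The agent names and permission scopes are assumptions carried over from the earlier claims example.

```python
# Illustrative "single-agent access" (least privilege per agent); the agent
# names and scopes below are assumptions, not a real access-control product.
AGENT_SCOPES = {
    "intake_agent":       {"read_claim"},
    "photo_agent":        {"read_claim", "read_photos"},
    "decision_agent":     {"read_claim", "read_policy"},
    "disbursement_agent": {"read_claim", "issue_payment"},
}

def authorize(agent: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to the agent."""
    return action in AGENT_SCOPES.get(agent, set())

# The intake agent cannot issue payments, which narrows the blast radius
# if it is manipulated by bad input.
print(authorize("intake_agent", "issue_payment"))        # False
print(authorize("disbursement_agent", "issue_payment"))  # True
```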

What’s next

Fraudsters never rest, and neither should companies. Adopting strong guiding principles for managing fraud in agentic AI is just one piece of a much larger and more prescriptive fraud risk management strategy—and ideally both are considered before or during implementation, not as an afterthought.

About the Author of This Piece

Michael Weil is Managing Director at Deloitte Financial Advisory Services LLP.

Also Read: Developing Autonomous Security Agents Using Computer Vision and Generative AI

