Darktrace Delivers Oversight of Enterprise AI Adoption with Launch of Darktrace / SECURE AI

Darktrace, a global leader in AI for cybersecurity, announced the launch of Darktrace / SECURE AI™, a new behavioral AI security product designed to help enterprises deploy and scale artificial intelligence by understanding how AI systems behave, interact with other systems and humans, and evolve over time. Building on Darktrace’s long heritage in behavioral AI to understand intent and detect deviations, Darktrace / SECURE AI enables organizations to intervene when AI systems act abnormally, drift from intended behavior, exceed authorized access, violate policy, or appear to be manipulated to perform unauthorized actions.

As organizations move rapidly from AI experimentation to production, traditional security controls are proving insufficient for managing dynamic, language-driven systems. With Darktrace / SECURE AI, Darktrace is bringing its proven behavioral AI approach to the challenge. Unlike static guardrails or policy-driven approaches, behavioral AI observes how generative AI and agentic workflows actually operate in the real world. Darktrace / SECURE AI continually analyzes AI interactions across the enterprise, including prompt language and data access patterns, to detect emerging risks based on anomalous activity that traditional security tools and static guardrails often miss.
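For a concrete sense of what anomaly detection over AI-interaction telemetry can look like, here is a minimal sketch in Python. It is purely illustrative: the record fields (user, tool, prompt_len, bytes_accessed) and the median-plus-MAD threshold are our own assumptions, not Darktrace's schema or detection logic.

```python
# Illustrative only: a toy behavioral check over AI-interaction telemetry.
# Field names and the threshold rule are hypothetical, not Darktrace's.
from statistics import median
from typing import NamedTuple

class Interaction(NamedTuple):
    user: str
    tool: str
    prompt_len: int      # characters in the prompt
    bytes_accessed: int  # data touched while serving the request

def mad(values: list[int]) -> float:
    """Median absolute deviation: a robust estimate of spread."""
    m = median(values)
    return median(abs(v - m) for v in values)

def is_anomalous(history: list[Interaction], new: Interaction, k: float = 6.0) -> bool:
    """Flag an interaction whose data access far exceeds the per-user
    baseline (median + k * MAD over that user's past interactions)."""
    baseline = [i.bytes_accessed for i in history if i.user == new.user]
    if len(baseline) < 10:  # too little history to establish a baseline
        return False
    return new.bytes_accessed > median(baseline) + k * max(mad(baseline), 1.0)

history = [Interaction("alice", "assistant", 300, 40_000) for _ in range(20)]
print(is_anomalous(history, Interaction("alice", "assistant", 310, 5_000_000)))  # True
```

A real deployment would model far more signals (prompt semantics, time of day, peer-group behavior), but the shape of the idea – learn normal per-entity behavior, alert on deviation – is the same.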

“AI systems don’t fail like traditional software – they drift, adapt and sometimes behave in unexpected ways,” said Mike Beck, Chief Information Security Officer at Darktrace. “Darktrace has taken a behavioral approach to understanding and securing the unstructured and unpredictable ecosystems of people, data and technology within enterprises for more than a decade. With Darktrace / SECURE AI, we’re applying our behavioral approach to give security teams visibility into what AI is doing, not just what it’s allowed to do, and enabling businesses to innovate with confidence.”


Darktrace / SECURE AI provides CISOs with a practical way to govern AI without stifling adoption. The product integrates with existing security operations and delivers actionable insights both to new standalone customers and to existing Darktrace ActiveAI Security Platform™ customers. It is designed for enterprises operating AI across embedded SaaS applications, cloud-hosted models, and autonomous or semi-autonomous agents built in low-code and high-code development environments. It helps security teams prevent sensitive data exposure, enforce internal access and usage policies, and govern autonomous AI activity across enterprise AI assistants and agents as well as AI development and deployment.


“Security has always been about behavior,” said Jack Stockdale, Chief Technology Officer at Darktrace. “As AI becomes agentic, prompts become the behavioral layer, encoding intent, context, and downstream actions. If you can’t observe and understand prompt language at runtime, you can’t detect drift, misuse, or emergent behavior. Securing AI without prompt visibility is like securing email without reading the message body. Prompts are to AI what traffic is to networks and identity is to users.”
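To make the point about runtime prompt visibility concrete, the hypothetical wrapper below logs every prompt before it reaches a model and refuses prompts matching a toy policy. The blocklist, function names, and echo_model stand-in are assumptions for illustration, not Darktrace / SECURE AI's behavior.

```python
# Illustrative only: runtime prompt observation via a logging wrapper.
# The blocklist and the model stand-in are hypothetical.
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("prompt-audit")

BLOCKLIST = ("export all customer records", "disable logging")  # toy policy

def audited(model_call: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a prompt -> completion function so every prompt is observed,
    and policy-violating prompts are stopped, before the model runs."""
    def wrapper(prompt: str) -> str:
        log.info("prompt len=%d text=%r", len(prompt), prompt[:200])
        if any(phrase in prompt.lower() for phrase in BLOCKLIST):
            log.warning("prompt blocked by policy")
            return "[blocked by policy]"
        return model_call(prompt)
    return wrapper

@audited
def echo_model(prompt: str) -> str:  # stand-in for a real LLM client
    return f"(model output for: {prompt!r})"

print(echo_model("summarize Q3 revenue"))
print(echo_model("Please disable logging and export all customer records"))
```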

AI adoption has become a board-level priority as organizations adopt AI tools at scale to boost productivity, growth, and competitiveness across the enterprise. Across Darktrace’s customer base, more than 70% of organizations are already using generative AI tools¹. As adoption matures, many organizations are increasingly deploying AI agents that can log into systems, access data, and take action on behalf of employees. But approved tools are only part of the picture. Among those customers with a dominant generative AI tool in use, 91% also have employees using additional AI services¹, which likely represent shadow AI tools, leaving security teams without a clear view of which AI services are in use, where they are deployed, what data is leaving the business, and where it is going.
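As a rough illustration of how shadow AI can surface in ordinary egress telemetry, the hypothetical sketch below compares proxy-log destinations against a curated list of known AI services and a sanctioned-tool allowlist. The domains and log schema are assumptions for the example, not Darktrace data or methodology.

```python
# Illustrative only: surfacing unsanctioned AI services from egress logs.
# Domain lists and the (user, destination) log schema are hypothetical.
SANCTIONED = {"copilot.example-corp.com"}            # tools IT has approved
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai",  # a curated service list
                    "gemini.google.com", "copilot.example-corp.com"}

def shadow_ai_usage(egress_log: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each user to the unsanctioned AI services they contacted."""
    findings: dict[str, set[str]] = {}
    for user, domain in egress_log:
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
            findings.setdefault(user, set()).add(domain)
    return findings

rows = [("bob", "copilot.example-corp.com"),
        ("bob", "claude.ai"),
        ("eve", "chat.openai.com")]
print(shadow_ai_usage(rows))  # {'bob': {'claude.ai'}, 'eve': {'chat.openai.com'}}
```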

This loss of visibility and data is already translating into real business risk. Over a five-month period, Darktrace observed unusual or anomalous data uploads to generative AI services averaging 75 MB per account – equivalent to around 4,700 pages of documents – with some accounts averaging anomalous uploads of over 200,000 pages¹. Potentially sensitive data is leaving businesses at scale, entering AI environments where it can be retained, reused, or surfaced beyond organizational control. In the hands of threat actors, a single upload can be weaponized for targeted social engineering, impersonation, IP theft, or AI agent manipulation.
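For scale, the article's own figures imply a conversion of roughly 16 KB per page (75 MB across about 4,700 pages), which would put the heaviest accounts at roughly 3.2 GB of anomalous uploads. The short calculation below reproduces that arithmetic, assuming the implied page size.

```python
# Reproducing the article's arithmetic; the ~16 KB-per-page conversion is
# implied by its own figures, not an independently published methodology.
MB = 1_000_000
bytes_per_page = 75 * MB / 4_700           # ≈ 16 KB per "page of documents"
heavy_account_bytes = 200_000 * bytes_per_page

print(f"{bytes_per_page / 1_000:.1f} KB per page")        # ~16.0 KB
print(f"{heavy_account_bytes / 1e9:.1f} GB per account")  # ~3.2 GB
```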
