New Report from Panorays Shines a Spotlight on AI Vendor Cyber Risks
AI is driving a rise in all kinds of cyber attacks and security breaches. Most of the attention goes to the use of AI by attackers, but a new report puts the spotlight on another source of AI-related risk: the AI already embedded within the business ecosystem itself.
A new report from Panorays reveals that CISOs lack visibility into their AI ecosystem and are struggling without the right tools to detect and manage AI vendors and AI-related threats.
The 2026 CISO Survey collected answers from 200 full-time CISOs at mid-sized and large companies in the US, spanning financial services, healthcare, tech, professional services, and other industries. It paints a picture of rising pressure on CISOs and their teams due to rapidly growing AI adoption in supply chains, the gaps in their practices, and how they are filling those gaps.
As the report reveals, AI adoption is surging, but it’s not always managed properly. Although AI feels ubiquitous, embedded into every workflow, industry, and interaction, it’s still relatively new, and its associated risks are still being uncovered.
“Our findings show that third-party security vulnerabilities aren’t going away – in fact, they’re becoming more prevalent due to a dangerous lack of visibility and the rampant adoption of unmanaged AI tools,” said Matan Or-El, founder and CEO of Panorays. “The rise of AI has only made supply chains more complex, and the connected nature of these data-dependent systems is expanding the attack surface.”
This report gives CISOs data to guide decision-making and strategy around security for AI vendors and tools. It helps reveal consensus on the best ways to manage AI solutions and helps teams benchmark their own capabilities.
AI Vendors Bring New Risks
It’s not surprising that 60% of participating CISOs consider AI vendors to pose unique risks. These vendors frequently process sensitive data through opaque models with low explainability, and their data storage, usage, and sharing policies can be unclear.
The Australian government recently warned businesses about AI risks in the supply chain and released guidelines for mitigating them, as did the UK government, while the US government has advised careful supply chain vetting for AI use.
But the risks of AI run beyond vendors. Any third party may be using AI tools in ways your company’s security team hasn’t approved or isn’t aware of, which could directly expose sensitive data or reveal vulnerabilities in that party’s defenses. Many companies that lack the resources for robust cybersecurity are enthusiastic AI adopters, and attackers exploit these risky connections as a backdoor into the business ecosystem.
Even internal AI use brings serious risks, ranging from prompt leakage and unauthorized sharing of sensitive data to regulatory non-compliance and unreliable or hallucinated outputs.
Shadow AI is running rampant across enterprises. Employees innocently using AI tools without the knowledge of security teams are creating risks that are all the more serious because they go undetected.
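Surfacing shadow AI usually starts with telemetry the security team already collects. Below is a minimal, illustrative Python sketch (not from the report) that scans an exported web-proxy log for connections to well-known AI service domains; the CSV format, field names, and domain list are all assumptions chosen for demonstration.

```python
import csv

# Illustrative list of well-known AI service domains; a real deployment
# would maintain a curated, regularly updated inventory.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.anthropic.com",
}

def find_shadow_ai(proxy_log_path: str) -> dict[str, set[str]]:
    """Map each internal user to the AI domains they contacted.

    Assumes a CSV export with 'user' and 'dest_host' columns; adjust the
    field names to match your proxy's actual schema.
    """
    hits: dict[str, set[str]] = {}
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if host in AI_DOMAINS:
                hits.setdefault(row.get("user", "unknown"), set()).add(host)
    return hits

if __name__ == "__main__":
    for user, domains in find_shadow_ai("proxy_export.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

A hit here is a starting point for a conversation, not proof of misuse; the value is in making previously invisible AI traffic visible to the security team.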
Supply Chain Visibility Is Poor, Even Without AI
The state of supply chain visibility adds to the general concern. Only 15% of CISOs have full visibility into their third-, fourth-, and Nth-party vendors, with most focusing their protections on direct vendors.
Just 41% of organizations monitor fourth parties for cyber risk, and only 13% track Nth-party vendors. These figures echo the strong warnings about supply chain risk already circulating in the cybersecurity world.
The deeper an entity sits in the supply chain, the less likely it is to be assessed and tracked effectively, a pattern borne out by where cyber incidents originate: half stem from fourth parties, Nth parties, or external entities such as affiliates or partners.
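The third/fourth/Nth-party terminology is easiest to picture as a graph: direct vendors sit one hop from the organization, their vendors two hops, and so on. The hedged Python sketch below, using an invented supplier map purely for illustration, walks such a graph breadth-first to show how quickly the set of entities an organization implicitly depends on grows with depth.

```python
from collections import deque

# Hypothetical supplier graph: each org maps to the vendors it uses directly.
SUPPLIERS = {
    "acme-corp": ["cloud-host", "ai-chat-vendor"],
    "cloud-host": ["cdn-provider", "dns-provider"],
    "ai-chat-vendor": ["gpu-cloud", "data-labeler"],
    "gpu-cloud": ["colo-facility"],
}

def vendors_by_depth(root: str) -> dict[int, set[str]]:
    """Breadth-first walk: depth 1 = third parties, depth 2 = fourth parties,
    depth >= 3 = Nth parties."""
    seen, levels = {root}, {}
    queue = deque([(root, 0)])
    while queue:
        org, depth = queue.popleft()
        for dep in SUPPLIERS.get(org, []):
            if dep not in seen:
                seen.add(dep)
                levels.setdefault(depth + 1, set()).add(dep)
                queue.append((dep, depth + 1))
    return levels

for depth, orgs in sorted(vendors_by_depth("acme-corp").items()):
    print(f"depth {depth}: {sorted(orgs)}")
```

Real dependency maps are far messier and rarely fully known, which is exactly the visibility gap the survey describes.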
With 60% of CISOs seeing an increase in third-party cyber incidents over the past year, it seems that traditional risk management strategies aren’t able to keep up.
Few CISOs Are Prepared for AI Risks
Even though AI is pervasive throughout business ecosystems, CISOs are just beginning to address it. The report found that only 22% of organizations have dedicated onboarding processes for evaluating AI vendors. The result: third-party AI tools are embedded in core environments without oversight.
Compounding the problem, just 21% of CISOs have a comprehensive, tested crisis response plan, and only 22% say their organization is fully prepared to meet upcoming compliance requirements related to third-party cyber risk. Over three-quarters are still on the road towards full compliance.
These data points back up assessments made by other organizations. For example, the World Economic Forum (WEF) reports that only 37% of organizations have processes in place to assess AI tool security before deployment. This lack of safeguards risks introducing vulnerabilities not only into individual IT networks, but into entire ecosystems.
AI Tools Need a Different Vetting Process
According to the report, GRC solution adoption is high, standing at 61%. However, dissatisfaction is even higher: 66% of participants said their GRC tools are only somewhat effective, or not effective at all, at managing general third-party cyber risk. When it comes to AI risks, these tools fare even worse.
There’s a good reason for this. GRC solutions were never designed for continuous, third-party-specific oversight, nor for the complexity of AI tools and vendors. Yet 52% of companies onboard AI vendors using generic processes built for traditional third parties, despite the heightened risks these vendors carry.
Only 22% follow a dedicated policy for onboarding AI vendors, with another 25% using informal or case-by-case evaluations. Larger organizations are slightly more likely to have AI-specific onboarding policies, indicating greater risk awareness.
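What a dedicated AI-vendor onboarding policy contains will vary by organization, and the report doesn’t prescribe one. As a hedged illustration only, the Python sketch below encodes a handful of plausible AI-specific checks (training on customer data, data residency, explainability, subprocessor disclosure) as a simple gate; every question and threshold here is an assumption, not Panorays’ methodology.

```python
from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    """Answers collected during onboarding; every field here is illustrative."""
    trains_on_customer_data: bool     # does the vendor train models on your data?
    data_residency_documented: bool   # storage locations and retention disclosed?
    model_explainability: str         # "high", "medium", or "low"
    subprocessors_disclosed: bool     # fourth/Nth-party dependencies named?
    incident_notification_hours: int  # committed breach-notification window

def onboarding_gate(a: AIVendorAssessment) -> list[str]:
    """Return blocking issues; an empty list means the vendor can proceed
    to the standard third-party review."""
    issues = []
    if a.trains_on_customer_data:
        issues.append("vendor trains models on customer data")
    if not a.data_residency_documented:
        issues.append("data storage and retention policies unclear")
    if a.model_explainability == "low":
        issues.append("opaque model with low explainability")
    if not a.subprocessors_disclosed:
        issues.append("fourth/Nth-party subprocessors not disclosed")
    if a.incident_notification_hours > 72:
        issues.append("breach notification window exceeds 72 hours")
    return issues

# Example: a vendor with undisclosed subprocessors gets flagged before onboarding.
vendor = AIVendorAssessment(False, True, "medium", False, 24)
print(onboarding_gate(vendor))  # ['fourth/Nth-party subprocessors not disclosed']
```

Encoding the policy this way buys consistency: every AI vendor faces the same questions before it touches production data, instead of being waved through the generic third-party process.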
As a consequence, workplaces are adopting black-box AI tools faster than security teams can keep up. As high-risk third-party systems are granted access to IT environments without scrutiny, CISOs are confronted with a dangerous and growing blind spot. Without tailored monitoring, organizations risk introducing serious vulnerabilities into their environments.
AI Policing AI
Finally, the report shows that AI is increasingly being used to manage vendor risk itself, primarily for streamlining assessments and improving accuracy. Two-thirds of companies have already introduced AI-powered vendor risk assessments, and most of the rest intend to do so. Only 1% say they don’t plan to implement AI-based vendor risk assessment solutions.
This finding cuts both ways. It’s reassuring to know that organizations are taking steps to close the gaps in their third-party risk management systems. However, adopting AI to manage AI without first establishing dedicated onboarding and monitoring standards could cause more problems than it solves.
When it comes to AI, “The defining factor for 2026 will be scale: attackers will automate creation, while defenders will automate detection,” noted Panorays CTO and Cofounder Demi Ben-Ari in a recent blog post. “The organizations that win will be those that combine automation, visibility, and continuous assessment to manage this new pace of risk.”
AI Is the Problem, but It Can Be the Solution
In its new report, Panorays lays bare the extent of the gaps running across business security ecosystems. AI vendors, AI tools, and AI workflows are compounding existing vulnerabilities in supply chain risk management. However, the report also reveals that CISOs are taking steps to address these gaps and adopt new strategies. Placing AI risks at the forefront of third-party risk management can help security teams develop policies that succeed in protecting their ecosystems.