
Coalition for Secure AI Releases Two Actionable Frameworks for AI Model Signing and Incident Response


OASIS Open Project Delivers Practical Tools to Build Trust and Defend AI Systems at Scale

OASIS Open, the international open source and standards consortium, announced the release of two critical publications advancing AI security practices from the Coalition for Secure AI (CoSAI), an OASIS Open Project. These new resources provide practical frameworks to help organizations strengthen the security and trustworthiness of their AI systems. CoSAI’s Software Supply Chain Security for AI Systems Workstream released “Signing ML Artifacts: Building towards tamper-proof ML metadata records” and the Preparing Defenders for a Changing Cybersecurity Landscape Workstream published “AI Incident Response Framework V1.0.” Together, these frameworks address key aspects of the full lifecycle of AI assurance, from preventing tampering before deployment to responding effectively when systems are attacked.

Model Signing: Building Trust in AI Supply Chains

Workstream 1’s publication, “Signing ML Artifacts,” addresses one of the most pressing challenges in AI deployment: verifying the authenticity and integrity of AI models before integrating them into mission-critical systems. As AI becomes woven into critical business processes, the question is no longer whether to implement model signing, but how quickly organizations can adopt it. Workstream 1’s guidance offers both the technical depth and the implementation roadmap needed to accelerate adoption, while ensuring interoperability across the AI ecosystem and maintaining the security, trust, and compliance that businesses demand.

“Model signing delivers tangible business value: reduced security risk, streamlined compliance, and increased stakeholder trust. This framework gives enterprises the tools to confidently deploy AI while maintaining visibility and control over their most valuable ML assets throughout their entire lifecycle,” said the Workstream 1 Leads, Andre Elizondo of Wiz, Matt Maloney of Cohere, and Jay White of Microsoft.


The publication introduces a staged maturity model designed to help organizations adopt model signing effectively, beginning with establishing basic artifact integrity through digital signatures, ensuring that models can be verified against unauthorized changes. It then advances to incorporating signature chaining and lineage, which create clear provenance trails and enable traceability across the entire AI supply chain. Finally, it integrates structured attestations and policy controls to support comprehensive AI governance frameworks that align with organizational security and compliance requirements.
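The first stage of that maturity model, basic artifact integrity through digital signatures, can be illustrated with a minimal sketch. This is not CoSAI’s prescribed tooling or signature format; it is a generic example of the underlying idea, using the widely available Python `cryptography` library with an Ed25519 key pair and a placeholder model artifact:

```python
# Minimal sketch of stage one of the maturity model: a publisher signs
# a digest of a model artifact, and a consumer verifies the signature
# before loading the model. Illustrative only; real deployments would
# use an interoperable signing ecosystem and managed keys.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_digest(data: bytes) -> bytes:
    """Hash the artifact so the signature covers a fixed-size digest."""
    return hashlib.sha256(data).digest()


# Publisher side: sign the digest of the serialized model.
model_bytes = b"...serialized model weights..."  # placeholder artifact
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(sha256_digest(model_bytes))

# Consumer side: verify the detached signature before use.
public_key = private_key.public_key()
try:
    public_key.verify(signature, sha256_digest(model_bytes))
    verified = True
except InvalidSignature:
    verified = False
```

Any unauthorized change to `model_bytes` alters the digest and causes verification to fail, which is the property the first stage establishes; the later stages layer provenance chains and attestations on top of this primitive.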


AI Incident Response: Preparing Defenders for Evolving Threats

AI systems face unique threats including data poisoning, model theft, prompt injection, and inference attacks that traditional incident response frameworks aren’t designed to handle. Workstream 2’s “AI Incident Response Framework V1.0” equips security practitioners with comprehensive, AI-specific guidance to detect, contain, and remediate these emerging threats.

“AI adoption is reshaping enterprise security, and operationalizing incident response with rapidly changing technology presents new challenges,” said Vinay Bansal of Cisco and Josiah Hagen of Trend Micro, CoSAI’s Workstream 2 Leads. “This framework presents incident examples over common AI use cases and provides playbooks specific to new risks in AI systems, helping organizations move from theory to practice.”

The framework complements existing guidance by addressing capabilities and gaps unique to AI. It helps defenders minimize the impact of AI exploitation while maintaining auditability, resiliency, and rapid recovery, even against sophisticated threats. The guide also tackles the complexities of agentic AI architectures, emphasizing forensic investigation and providing concrete steps to prioritize security investments, scale mitigation strategies, implement layered defenses, and navigate AI governance challenges.

Industry Collaboration and Impact

Together, these publications – developed from the collaborative efforts of CoSAI’s more than 40 industry partners, including Premier Sponsors EY, Google, IBM, Microsoft, NVIDIA, Palo Alto Networks, PayPal, Snyk, Trend Micro, and Zscaler – build on and reinforce CoSAI’s broader initiatives, including the recent Strategic Update, the donation of Google’s Secure AI Framework (SAIF), and the Principles for Secure-by-Design Agentic Systems.

Technical contributors, researchers, and organizations are welcome to participate in CoSAI’s open source community and support its ongoing work. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.


