[bsfp-cryptocurrency style=”widget-18″ align=”marquee” columns=”6″ coins=”selected” coins-count=”6″ coins-selected=”BTC,ETH,XRP,LTC,EOS,ADA,XLM,NEO,LTC,EOS,XEM,DASH,USDT,BNB,QTUM,XVG,ONT,ZEC,STEEM” currency=”USD” title=”Cryptocurrency Widget” show_title=”0″ icon=”” scheme=”light” bs-show-desktop=”1″ bs-show-tablet=”1″ bs-show-phone=”1″ custom-css-class=”” custom-id=”” css=”.vc_custom_1523079266073{margin-bottom: 0px !important;padding-top: 0px !important;padding-bottom: 0px !important;}”]

Open GenAI Models Proven Secure for Enterprise Adoption, New Evaluation Shows

logo

Security scores for leading open-source models jumped from 1.8% to 99.6% after applying targeted guardrails, outperforming closed models in enterprise-grade tests

Related Posts
1 of 42,231

A new evaluation led by LatticeFlow AI, in collaboration with SambaNova, provides the first quantifiable evidence that open-source GenAI models, when equipped with proper risk guardrails, can meet or exceed the security levels of closed models, making them suitable for implementation in a wide range of use cases, including highly-regulated industries such as financial services.

Also Read: AiThority Interview with Tim Morrs, CEO at SpeakUp

“At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” said Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI.

The evaluation assessed the top five open models, measuring their security before and after applying guardrails to block malicious or manipulative inputs. The security scores of the open models jumped from as low as 1.8% to 99.6%, while maintaining above 98% quality of service, demonstrating that with the right controls, open models are viable for secure, enterprise-scale deployment.

​​Rethinking Open-Source GenAI for Enterprise Adoption

Many companies are actively exploring open-source GenAI to gain flexibility, reduce vendor lock-in, and accelerate innovation. But despite growing interest, adoption has often stalled. The reason: a lack of clear, quantifiable insights into model security and risk.

The evaluations released address that gap, providing the technical evidence needed to make informed decisions about whether and how to deploy open-source models securely.

“Our customers — from leading financial institutions to government agencies— are rapidly embracing open-source models and accelerated inference to power their next generation of agentic applications,” said Harry Ault, Chief Revenue Officer at SambaNova. “LatticeFlow AI’s evaluation confirms that with the right safeguards, open-source models are enterprise-ready for regulated industries, providing transformative advantages in cost efficiency, customization, and responsible AI governance.”

“At LatticeFlow AI, we provide the deepest technical controls to evaluate GenAI security and performance,” said Dr. Petar Tsankov, CEO and Co-Founder of LatticeFlow AI. “These insights give AI, risk, and compliance leaders the clarity they’ve been missing, empowering them to move forward with open-source GenAI safely and confidently.”

Key Findings from the Evaluation

LatticeFlow AI evaluated five widely used open foundation models:

  • Qwen3-32B
  • DeepSeek-V3-0324
  • Llama-4-Maverick-17B-128E-Instruct
  • DeepSeek-R1
  • Llama-3.3-70B-Instruct

Each model was tested in two configurations:

  1. Base model, as typically used out-of-the-box
  2. Guardrailed model, enhanced with a dedicated input filtering layer to block adversarial prompts

The evaluation focused on cybersecurity risks, simulating enterprise-relevant attack scenarios (such as prompt injection and manipulation) to measure each model’s resilience and its impact on usability.

Key results:

  • DeepSeek R1: from 1.8% to 98.6%
  • LLaMA-4 Maverick: from 33.5% to 99.4%
  • LLaMA-3.3 70B Instruct: from 51.8% to 99.4%
  • Qwen3-32B: security score increased from 56.3% to 99.6%
  • DeepSeek V3: from 61.3% to 99.4%

All models maintained over 98% quality of service, confirming that security gains did not compromise user experience

Why This Matters for Financial Institutions

As GenAI moves from experimentation to deployment, enterprises face growing scrutiny from regulators, boards, and internal risk teams. Models must now be auditable, controllable, and provably secure.

This evaluation provides transparent, quantifiable evidence that open-source models can meet enterprise-grade security expectations with the right risk mitigation strategies.

Also Read: Cognitive Product Design: Empowering Non-Technical Users Through Natural Language Interaction With AI-Native PLM

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Comments are closed.