
JFrog Unveils Secure AI Model Delivery Accelerated by NVIDIA NIM Microservices

JFrog Ltd., the Liquid Software company and creator of the JFrog Software Supply Chain Platform, announced the general availability of its integration with NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform. The JFrog Platform is the only unified, end-to-end, secure DevSecOps and MLOps solution with native NVIDIA NIM integration, enabling rapid deployment of GPU-optimized, pre-approved machine learning (ML) models and large language models (LLMs) to production with enterprise-grade security, increased visibility, and governance controls. This unified infrastructure lets developers build and deliver AI-powered applications with greater efficiency and peace of mind.



“The demand for secure and efficient AI implementations continues to rise, with many businesses aiming to expand their AI strategies in 2025. However, AI deployments often struggle to reach production due to significant security challenges,” said Gal Marder, Chief Strategy Officer at JFrog. “AI-powered applications are inherently complex to secure, deploy, and manage, and concerns around the security of open-source AI models and platforms continue to grow. We’re excited to collaborate with NVIDIA to deliver an easy-to-deploy, end-to-end solution that enables companies to accelerate the delivery of their AI/ML models with enterprise-grade security, compliance, and provenance.”

With accelerating demand for AI in software applications, data scientists and ML engineers face significant challenges when scaling enterprise ML model deployments. The complexities of integrating AI workflows with existing software development processes—coupled with fragmented asset management, security vulnerabilities, and compliance issues—can lead to lengthy, costly deployment cycles and, often, failed AI initiatives. According to IDC, by 2028, 65% of organizations will use DevOps tools that combine MLOps, LLMOps, DataOps, CloudOps, and DevOps capabilities to optimize the route to AI value in software delivery processes.

“The rise of open source MLOps platforms has made AI more accessible, letting developers of all skill levels quickly build impressive AI applications, but this must be done securely and in compliance with today’s rapidly evolving government regulations,” said Jim Mercer, IDC’s Program Vice President, Software Development, DevOps & DevSecOps. “As enterprises scale their generative AI deployments, having a central repository of pre-approved, fully compliant, performance-optimized models that developers can choose from and quickly deploy, while maintaining high levels of visibility, traceability, and control through existing DevSecOps workflows, is compelling.”


The JFrog integration with NVIDIA NIM enables enterprises to seamlessly deploy and manage the latest foundation LLMs – including Meta’s Llama 3 and Mistral AI models – while maintaining enterprise-grade security and governance controls throughout their software supply chain. JFrog Artifactory – the heart of the JFrog Platform – provides a single solution for hosting and managing all software artifacts, binaries, packages, ML models, LLMs, container images, and components throughout the software development lifecycle. By integrating NVIDIA NIM into the JFrog Platform, developers can easily access NVIDIA NGC – a hub for GPU-optimized deep learning, ML, and HPC models. This gives customers a single source of truth for software models and tools, while leveraging enterprise DevSecOps best practices for visibility, governance, and control across their software supply chain.


The JFrog Platform update provides AI developers and DevSecOps teams with multiple benefits, including:

  • Unified ML & DevOps Workflows: Data scientists and ML engineers can now version, secure, and deploy models using the same JFrog DevSecOps workflows they already know and trust. This eliminates the need for separate ML tools while ensuring automated compliance checks, audit trails, and governance of ML models using JFrog Curation.
  • End-to-End Security & Integrity: Implement continuous security scanning across containers, AI models, and dependencies – delivering contextual insights across NIM microservices – to identify vulnerabilities, supplemented by smart threat detection that focuses on real risks and proactive protection against compromised AI models and packages.
  • Exceptional Model Performance and Scalability: Optimized AI application performance using NVIDIA accelerated computing infrastructure, offering low latency and high throughput for scalable deployment of LLMs to large-scale production environments. Easily bundle ML models with dependencies to reduce external requirements and utilize existing workflows for seamless AI deployment. Additionally, the JFrog Platform offers flexible deployment options for increased scalability, including self-hosted, multi-cloud, and air-gap deployments.

“Performance and security are crucial for successful enterprise AI deployments,” said Pat Lee, vice president, Enterprise Strategic Partnerships, NVIDIA. “With NVIDIA NIM integrated directly into the JFrog Platform, developers can accelerate AI adoption with a unified, end-to-end solution for building, deploying, and managing production AI agents at scale.”


