GitLab and Google Cloud Partner to Expand AI-Assisted Capabilities with Customizable Gen AI Foundation Models
GitLab, the most comprehensive, scalable enterprise DevSecOps platform for software innovation, and Google Cloud announced an extension of their strategic partnership to deliver secure AI offerings to the enterprise.
GitLab is trusted by more than 50% of the Fortune 100 to secure and protect their most valuable assets, and leads with a privacy-first approach to AI. By leveraging Google Cloud’s customizable foundation models and open generative AI infrastructure, GitLab will provide customers with AI-assisted features directly within the enterprise DevSecOps platform.
GitLab is working with Google Cloud because of its strong commitment to privacy and enterprise readiness, and its leadership in AI. With generative AI support in Vertex AI, GitLab can tune Google's foundation models with its own data and leverage these models to deliver new generative AI-powered experiences. Google Cloud allows customers to control their data with enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support.
Through Vertex AI and the Built with Google Cloud AI program, GitLab will be able to use Google's foundation models to provide customers with AI-powered offerings within its own cloud infrastructure. This allows GitLab to maintain its commitment to protecting user privacy by keeping customer intellectual property and source code within GitLab's cloud infrastructure.
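As a rough illustration of what this kind of integration can look like, the sketch below uses the Vertex AI SDK (the 2023-era PaLM text model interface in the google-cloud-aiplatform package) to tune a Google foundation model on data held in a customer-controlled Google Cloud project. The project ID, bucket path, and training parameters are placeholders, and this is an assumption-laden sketch, not GitLab's actual implementation.

# Illustrative sketch only -- not GitLab's implementation. Assumes the
# 2023-era Vertex AI PaLM text model interface and placeholder
# project, region, and bucket values.
import vertexai
from vertexai.language_models import TextGenerationModel

# Initialize the SDK against your own Google Cloud project, so prompts,
# tuning data, and the tuned model stay inside that project boundary.
vertexai.init(project="my-gcp-project", location="us-central1")

# Load one of Google's text foundation models.
model = TextGenerationModel.from_pretrained("text-bison@001")

# Tune the foundation model on your own JSONL instruction data
# (each record: {"input_text": ..., "output_text": ...}).
model.tune_model(
    training_data="gs://my-bucket/devsecops-examples.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",  # supervised tuning region (2023)
    tuned_model_location="us-central1",  # where the tuned model is deployed
)

The point of the sketch is the data boundary: both the tuning data and the resulting tuned model live in the customer's own project rather than leaving its cloud infrastructure.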
GitLab plans to improve its customers' DevSecOps workflow efficiency by 10x by extending AI-assisted workflows to all users involved in delivering software value. By implementing AI-powered capabilities throughout the software development lifecycle, GitLab delivers value across the enterprise, enabling faster business transformation without sacrificing security or privacy.
The first experimental feature leveraging Google Cloud's generative AI models is called Explain this Vulnerability. This capability empowers companies to make security a cross-organizational effort by providing users with a natural language description of vulnerabilities found in their code, along with a recommendation for how to resolve them at the time of detection. Explain this Vulnerability can be used by developers as well as security and operations teams, allowing customers to stay secure while remaining efficient and improving speed to delivery.
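To make the idea concrete, here is a minimal, hypothetical sketch of the kind of call a feature like this could make against a Vertex AI text model: a vulnerability finding is folded into a prompt, and the model returns a natural-language explanation plus a remediation suggestion. The model name, prompt wording, finding fields, and parameters are illustrative assumptions, not GitLab's actual implementation.

# Hypothetical sketch of prompting a Vertex AI text model to explain a
# vulnerability finding -- illustrative only, not GitLab's implementation.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")
model = TextGenerationModel.from_pretrained("text-bison@001")

# Example finding shape; a real scanner report would supply these fields.
finding = {
    "title": "SQL injection",
    "severity": "High",
    "location": "app/models/user.rb:42",
    "description": "User-supplied input is interpolated into a SQL query.",
}

prompt = (
    "Explain the following security vulnerability in plain language and "
    "recommend how to fix it.\n"
    f"Title: {finding['title']}\n"
    f"Severity: {finding['severity']}\n"
    f"Location: {finding['location']}\n"
    f"Details: {finding['description']}\n"
)

# A low temperature keeps the explanation factual rather than creative.
response = model.predict(prompt, temperature=0.2, max_output_tokens=512)
print(response.text)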
Explain this Vulnerability joins GitLab's experimental features such as Explain this Code, Summarize Issue Comments, and Summarize Merge Request Changes, as well as its existing AI-enabled features, Code Suggestions and Suggested Reviewers, all of which are focused on driving developer productivity beyond code development and improving workflow automation for all users throughout the software development lifecycle.
GitLab’s 2023 DevSecOps Report: Security Without Sacrifices found that developers are increasingly using AI for testing and security – with 62% of developers using AI/ML to check code, up from 51% in 2022. Additionally, 36% of developers use AI/ML for code review, up from 31% the previous year. GitLab is focused on creating privacy-first solutions that enable enterprises and other highly regulated organizations to adopt AI/ML throughout the software development lifecycle.