Artificial Intelligence | News | Insights | AiThority
[bsfp-cryptocurrency style=”widget-18″ align=”marquee” columns=”6″ coins=”selected” coins-count=”6″ coins-selected=”BTC,ETH,XRP,LTC,EOS,ADA,XLM,NEO,LTC,EOS,XEM,DASH,USDT,BNB,QTUM,XVG,ONT,ZEC,STEEM” currency=”USD” title=”Cryptocurrency Widget” show_title=”0″ icon=”” scheme=”light” bs-show-desktop=”1″ bs-show-tablet=”1″ bs-show-phone=”1″ custom-css-class=”” custom-id=”” css=”.vc_custom_1523079266073{margin-bottom: 0px !important;padding-top: 0px !important;padding-bottom: 0px !important;}”]

Legit Security Releases Industry’s First AI Discovery Capabilities

By discovering developers’ use of AI, security teams gain broader visibility and control as part of a comprehensive AppSec program

Legit Security, the leading application security posture management (ASPM) platform that enables secure application delivery, announced the availability of the cybersecurity industry’s first AI discovery capabilities. With these new capabilities, Legit helps bridge the gap between security and development by enabling CISOs and AppSec teams to understand where and when AI code is used and take action to ensure proper security controls are in place – without slowing software delivery.

As developers harness the power of AI and large language models (LLMs) to develop and deploy capabilities more quickly, new risks arise. For example, AI-generated code may contain unknown vulnerabilities or flaws that put the entire application at risk. In addition, AI-generated code can introduce legal issues if copyright restrictions are in place. Another risk is improper implementation of AI features, which can lead to data exposure, such as customers bypassing prompt protections and extracting sensitive data. Despite all this, security teams rarely understand how developers use AI-generated code, resulting in security blind spots that impact both the organization and the software supply chain.

Recommended AI News: Sumsub Unveils Industry-First Deepfake Detection in Video Identification

“There’s still a huge disconnect between what CISOs and their teams believe to be true and what is actually happening on the ground in development. This belief gap is particularly acute when it comes to understanding how, when, and why AI technology is used by developers,” said Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BILM) and author of Software Security. “In our recent BIML publication ‘An Architectural Risk Analysis of Large Language Models’ we identified 81 LLM risks, including a critical top ten – none of which can be mitigated without thorough understanding of where AI is used to deliver code.”

Legit’s platform enables security leaders, including CISOs, product security leaders, and security architects, to gain comprehensive visibility into risks across the development pipeline from the infrastructure to the application layer. With a crystal-clear view of the development lifecycle, customers ensure the code deployed is traceable, secure, and compliant. These new AI code discovery capabilities bolster the platform by closing a significant visibility gap that allows security to take preventive actions, decrease the risk of legal exposure, and ensure compliance.

Related Posts
1 of 40,855

“AI offers huge potential to enable developers and organizations to deliver and innovate faster, but it is important to understand whether such decisions introduce risk,” said Liav Caspi, co-founder and chief technology officer at Legit Security. “Our aim is to ensure nothing stops developers from delivering while providing security and the confidence they have visibility and control into the usage of AI and LLMs. We already helped some of our customers see where and how AI is used, which was new information for the team.”

Recommended AI News: Proxima Introduces Pioneering AI-Powered Customer Support Solution

Legit’s AI code discovery capabilities provide a range of benefits to both security and development teams, including:

  • Discovery of AI-generated code: Legit provides a full view of the development environment, including code derived from AI-generated coding tools (e.g., GitHub Copilot).
  • Full visibility of the dev environment: By gaining a full view of the application environment, including repositories using LLM, MLOps services, and code generation tools, Legit’s platform offers the context necessary to understand and manage an application’s security posture.
  • Security policy enforcement: Legit Security detects LLM and GenAI development and enforces organizational security policies, such as ensuring all AI-generated code gets reviewed by a human.
  • Real-time notifications of GenAI code: Legit can immediately notify security teams when users install AI code generation tools, providing greater transparency and accountability.
  • Protect against releasing vulnerable code: Legit’s platform provides guardrails to prevent the deployment of vulnerable code to production, including that delivered via AI tools.
  • Alert on LLM risks: Legit scans LLM application’s code for security risks, such as prompt injection and insecure output handling.

Recommended AI News: Cardboard Spaceship Lands $1 Million for LaunchPad Ventures to Drive Licensed Content Curation

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]

Comments are closed.