
AI-generated Code: The Fourth Component of Software Development

As organizations race to harness the potential of AI-generated code, Synopsys cautions businesses to beware of the hidden risks lurking within these innovative technologies. Contrary to common assumptions, GenAI tools may inherit security and quality issues from the code they are trained on. Drawing parallels to the early days of open source software adoption, Jim Ivers, Vice President, Marketing, Synopsys Software Integrity Group, walks through the complexities of GenAI adoption and the evolving landscape of software development in this article.

There is enormous attention on generative AI (GenAI) and its potential to change software development. While the full impact of GenAI is yet to be known, organizations are eagerly vetting the technology and separating the hype from the real, pragmatic benefits. In parallel, software security professionals are closely watching the practical impact of GenAI and how application security testing (AST) must adapt as adoption increases.

Overview of AI-generated code

AI-generated code (and AI coding assistants) will revolutionize software development, becoming the fourth major component of software, alongside proprietary, third-party commercial, and open source components.

However, since the large language models (LLMs) powering AI coding assistants are trained on publicly available software (including open source software), organizations can’t assume that AI-generated code is perfect. It can inherit the security and quality issues present in the code it was trained on, and it can create license violations and potential IP risks when it reproduces code copied from open source. As in the early days of open source, fear of these risks is slowing the adoption of AI-generated code and preventing organizations from realizing its full potential.

Until the advent of GenAI, software was composed of three types of components.

  • The code you wrote.
  • The code you bought.
  • The code you used from open source.

As organizations consider using GenAI coding assistants, the most prudent position is to view AI-generated code as a fourth type of component, with its own benefits and risks.

The mistaken presumption that AI produces clean code

GenAI relies on deep-learning LLMs built from massive amounts of code collected from internet websites, forums, repositories, and open source projects. GenAI tools like ChatGPT and Copilot use those LLMs to translate a human-like command into code. As the hype for GenAI grew, so did the presumption that the code used to build the LLMs would be free of licensing and vulnerability issues, and that the LLMs would therefore produce code free of bugs and flaws.

In fact, the opposite is true. Studies such as the “Open Source Security and Risk Analysis” (OSSRA) report show that codebases contain numerous vulnerabilities and licensing issues: the 2024 edition found vulnerabilities in 84% of scanned codebases and licensing conflicts in 53%.

If GenAI tools are learning from existing codebases, like those scanned for the OSSRA report, it is highly likely that these tools will carry those problems into generated code. Furthermore, technology advancements are quickly followed by people looking to exploit new weaknesses, and tradecraft to contaminate LLMs has already surfaced. Organizations should not presume that GenAI coding assistants will produce pristine code free of risk. Generated code must be tested like any other code.
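To make that concrete, here is a hypothetical illustration (the function, table, and column names are invented for this sketch, not drawn from any real assistant's output). Asked to "look up a user by name," an assistant trained on public repositories can plausibly reproduce the string-concatenation pattern common in that data, and with it a classic SQL injection flaw:

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # FLAW: user input is concatenated directly into the SQL statement,
    # so an input like  x' OR '1'='1  returns every row in the table.
    query = "SELECT id, name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # FIX: a parameterized query keeps user data separate from SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

Static analysis flags the unsafe variant regardless of whether a human or a model wrote it, which is precisely the point: the origin of the code does not change how it must be tested.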

The Role of AST in AI

The fundamental truth is that all code has flaws and bugs, and using GenAI will not change that. GenAI and AST are not mutually exclusive; AST is a necessary enabling agent for AI adoption. The essential three testing methodologies (static analysis, dynamic analysis, and software composition analysis, or SCA) will remain critical to monitoring the security and quality of software.

Organizations must use a multi-faceted testing approach to find and fix issues in a timely and efficient manner.
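As a rough sketch of what that can look like in an automated pipeline, the Python below wires the three methodologies into a single pass/fail gate. The scanner commands are placeholders, not real product CLIs; an actual deployment would substitute the organization's chosen tools and their real invocations.

import subprocess
import sys

# Placeholder commands for each testing methodology; swap in real tools.
SCANS = {
    "SAST": ["sast-scanner", "--src", "."],                         # static analysis of source code
    "SCA":  ["sca-scanner", "--dir", "."],                          # open source and license audit
    "DAST": ["dast-scanner", "--target", "http://localhost:8080"],  # probe of the running application
}

def run_gate() -> int:
    failures = []
    for name, cmd in SCANS.items():
        # Each scan runs independently; no single methodology replaces the others.
        if subprocess.run(cmd).returncode != 0:
            failures.append(name)
    if failures:
        print("Security gate failed: " + ", ".join(failures), file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())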


In a recent publication, “Predicts 2024: AI & Cybersecurity—Turning Disruption into an Opportunity,” Gartner predicts growing adoption of GenAI, but with several caveats. The hype around eliminating the need for AST solutions is quickly debunked: the document notes that “through 2025, generative AI will cause a spike of cybersecurity resources required to secure it, causing more than a 15% incremental spend on application and data security.”

Certainly, AST best practices and deployment methods will need to evolve. Organizations see GenAI as another way to increase development velocity. But to realize that benefit, they will need automated AST solutions that are integrated into development workflows and can scale with development efforts and the higher volumes of code GenAI makes possible.
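One way to keep scan times proportional to growing code volume is to analyze only what changed, which is what workflow integration makes possible. The sketch below (again with a placeholder sast-scanner command) shows the idea as a Git pre-commit step that scans just the staged Python files:

import subprocess
import sys

def staged_python_files() -> list[str]:
    # Ask Git for files added, copied, or modified in the pending commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing relevant changed; let the commit proceed
    # Incremental scan: only new or modified code is analyzed.
    return subprocess.run(["sast-scanner", "--files", *files]).returncode

if __name__ == "__main__":
    # A nonzero exit blocks the commit when installed as a pre-commit hook.
    sys.exit(main())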

A lesson from recent history

At Synopsys, we view GenAI as the next evolutionary step on the AST journey, and history shows that AST can enable organizations looking to gain the benefits promised by new technology. A parallel can be drawn to the early days of open source software (OSS), when organizations were reluctant to accept the perceived risks of broad open source usage. Fast forward to today, and most applications are composed of 77% or more open source software.

As OSS began to proliferate, organizations struggled to manage it, track dependencies, and identify potential vulnerabilities. Early enterprise adoption of OSS was hindered primarily by licensing and IP-protection concerns, with royalty obligations and other licensing issues creating risk.

As OSS usage spread and vulnerabilities were introduced via OSS components, the need to identify and track such vulnerabilities gained attention. In the early days, if a vulnerability was discovered in an open-source component, organizations were unprepared to understand their exposure and know what software needed to be remediated. Excel was the tool of choice for tracking OSS usage, and centralized knowledgebases for OSS vulnerabilities were nascent at best. This left organizations struggling to embrace the efficiencies of open source while managing the risk to their business.

The code produced by GenAI coding assistants carries the same potential for licensing and vulnerability risks. Just as SCA solutions reduce the risk for organizations using OSS, SCA is a crucial component for scanning AI-generated code.

Rising to evolving challenges

The nature of how GenAI learns to deliver code with desired functionality requires AST techniques to evolve. A good example is snippets: extracted portions of OSS code. Already difficult to identify, snippets can be readily absorbed into LLMs and replicated in GenAI-produced code. If an AI-generated snippet comes from an open source component with a restrictive license, the organization is exposed to legal and compliance risk.

Unfortunately, most SCA tools use filesystem scanning techniques that lack the sophistication to detect snippets. It should be noted that snippets can also include vulnerabilities from the original OSS component, and those vulnerabilities are much more difficult to trace through SCA workflows. Here again is where following AST best practices is critical, as code vulnerabilities should be discoverable through static application security testing (SAST), and runtime vulnerabilities should be discoverable through dynamic application security testing (DAST).
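For intuition on why snippet detection takes more than matching whole files, consider fingerprinting, the general idea behind snippet-aware SCA. The toy sketch below hashes overlapping token windows and measures how much of a generated file overlaps with a known OSS file. Real tools are far more sophisticated: they normalize identifiers, thin fingerprints with techniques like winnowing, and match against large knowledgebases rather than one reference file.

import hashlib

def fingerprints(code: str, k: int = 5) -> set[str]:
    # Hash every k-token window; a naive whitespace tokenizer keeps the toy simple.
    tokens = code.split()
    return {
        hashlib.sha1(" ".join(tokens[i:i + k]).encode()).hexdigest()
        for i in range(max(len(tokens) - k + 1, 1))
    }

def snippet_overlap(generated: str, oss_component: str) -> float:
    # Fraction of the generated code's fingerprints that also appear in a
    # known OSS component; a high score suggests a copied snippet that
    # warrants a license and vulnerability review.
    gen = fingerprints(generated)
    return len(gen & fingerprints(oss_component)) / len(gen) if gen else 0.0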

The “essential three” testing programs of SCA, SAST, and DAST remain indispensable to building trust in your software.

Summary

GenAI will undoubtedly bring change to software development as the drive to accelerate code creation continues. As with all “silver bullet” technologies, GenAI has limitations and pitfalls that must be addressed before it can deliver the benefits it promises. But promises of pristine, secure code that obviates the need for application security testing are at best premature and may prove to be ill-conceived.

Application security testing can provide a path that enables organizations to use this technology while ensuring that AI-generated code does not create real risks to the business. AST can be a catalyst to GenAI adoption, just as it was for OSS. Organizations must evolve their AST policies and processes to ensure they can reap the benefits of GenAI.

