Artificial Intelligence | News | Insights | AiThority
[bsfp-cryptocurrency style=”widget-18″ align=”marquee” columns=”6″ coins=”selected” coins-count=”6″ coins-selected=”BTC,ETH,XRP,LTC,EOS,ADA,XLM,NEO,LTC,EOS,XEM,DASH,USDT,BNB,QTUM,XVG,ONT,ZEC,STEEM” currency=”USD” title=”Cryptocurrency Widget” show_title=”0″ icon=”” scheme=”light” bs-show-desktop=”1″ bs-show-tablet=”1″ bs-show-phone=”1″ custom-css-class=”” custom-id=”” css=”.vc_custom_1523079266073{margin-bottom: 0px !important;padding-top: 0px !important;padding-bottom: 0px !important;}”]

Risk in Focus: What Enterprises Should Implement, Before Adopting Open-Source AI

By Susannah Shattuck, Head of Product, Credo AI

When proponents discuss AI’s potential to deliver unprecedented, outsized value, they’re really standing on the shoulders of giants who have developed the open-source software that has enabled researchers and builders to catalyze this technological revolution. As the Cybersecurity & Infrastructure Security Agency (CISA) recently put it, “It’s safe to say that many innovations of the digital age would not have been possible without OSS.” That’s because open-source software (OSS) operates in 90% of commercial software and

generates returns far exceeding the price of its inputs. Harvard Business School suggests OSS costs $4.15 billion per year in development and maintenance, but creates $8.8 trillion in value.

AI is no exception to this rule, as recent advances in the open-source AI space mean that developers no longer need to choose between performance and open source. LLaMa 3.1, for example, outperforms GPT-4o and Claude 3.5 Sonnet on several benchmarks. The fact that open source is keeping pace with closed, proprietary models is great news for innovation. It also moves the needle on transparency; open-source models are more easily interrogated through model weights and training data, serving as a powerful resource for academia, and can level the playing field between incumbents and challengers. Greater transparency further lets an ecosystem of researchers, innovators, and security experts test the substance and integrity of models, enhancing their capabilities and safeguards.

Also Read: What is Return on AI – and How Do Companies Measure It

But while the magnifying potential of AI and OSS converges productively across most use cases and concerns, this nexus is a critical focus for risk management as data breaches are top of mind for consumers. According to the U.S. Internet Crime Complaint Center, the reported number of data breaches tripped between 2021 and 2023, suggesting recent trailblazing technological developments have not hampered malicious activity. It’s not far-flung to consider even more comprehensive breaches in a more AI-driven future, where automated technologies are responsible for even more data—and even more sensitive information. AI’s iterative nature may also make vulnerabilities harder to unwind, augmenting the technological cost of remediation.

Faced with that threat level, enterprises across industries should seriously evaluate the forms of legal recourse that AI models do or don’t deliver; open-source AI models almost invariably offer little, or no, liability protection compared to commercial alternatives. Couple security risks with IP risks—as open-source models may have been trained on copyrighted data—and licensing risks, and the potential costs of open-source AI snowball.

This doesn’t mean enterprises should jettison OSS in the age of AI. If anything, many commercial models harbor some of the same risks as OSS—such as using copyrighted data—though many are less transparent about shortcomings. OSS already operates in the majority of commercial software; that figure is unlikely to change, which should encourage enterprise teams to calculate how they can deploy internal resources to bring promising open-source AI tools up to enterprise-grade levels of security and consistency, while also ensuring that these tools advance the enterprise’s sensitive data.

Related Posts
1 of 18,860

It’s important to distinguish models that are “free to be used” from those that are developed in an open, participatory way. Transparency about which models you’re using—and what kind of data was used to train those models, how they behave, and what their potential shortcomings and vulnerabilities are is critical to mitigating ethical AI risks. Enterprises have three primary tools at their disposal while considering—and then integrating—open-source AI into their operations.

First, integrate thorough testing into AI processes and adoption. When using any kind of LLM, teams must thoroughly test its behavior in the context in which it will be deployed; this is especially critical for open-source models, which may not have undergone the same kind of rigorous testing or red-teaming as closed-source models. That includes evaluating the security standards of the foundations supporting the OSS; some fail to even abide by “basic API security patterns,” opening their users to major threats, including from state-sponsored hackers.

Also Read: AI and Big Data Governance: Challenges and Top Benefits

Second, make adaptability a core part of the framework. The landscape of AI assessment tools is constantly evolving. Numerous open-source packages currently exist that explore dataset and model characteristics alongside dimensions like fairness, security, privacy, and performance. Moreover, academic research in these fields is a thriving endeavor that continuously generates new concepts and frameworks. In short, measurement best practices are changing rapidly, and the challenge of assessing AI systems is far from being solved. This, combined with the rapid development of AI models themselves, requires an assessment framework to be highly adaptable.

Third, design and implement rigorous governance processes. Because open-source AI is so widely available, it’s all the more critical to have rigorous governance processes in place to address shortcomings and vulnerabilities—which can easily be discovered and exploited by bad actors. Tracking models and model versions across applications and projects can help ensure timely compliance with governance demands, especially in the face of a dynamic technological and regulatory environment. Changes to underlying models require further re-testing. AI governance is critical to address the new and emerging risks that most organizations don’t have processes to deal with already, in addition to shoring up the existing governance processes (like security and privacy workflows) to address new concerns related to generative AI.

Also Listen: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

AI governance and managing risk is clearly coming more to the forefront across Big Tech boardrooms, which will ultimately deliver benefits for industry and society. Ultimately, whether open-source AI delivers outsized benefits—or creates a costly crucible of vulnerable tools—for industry and society writ large will depend on how enterprises interact with open-source communities. Dovetailing OSS’s scrappiness and efficiency with commercial-grade vigilance and resources can sustain a collaborative ethos in the AI era, while also continuing to ensure democratic access to secure, paradigm-shifting tools.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Comments are closed.