Artificial Intelligence | News | Insights | AiThority

Arthur Debuts Arthur Shield, the First Firewall for Large Language Models like ChatGPT

Arthur, an AI monitoring platform trusted by some of the world's largest organizations to ensure their AI systems are well managed and responsibly deployed, introduced a powerful addition to its suite of AI monitoring tools: Arthur Shield — the first firewall for large language models (LLMs). This patented new technology enables companies to deploy LLM applications like ChatGPT faster and more safely within an organization, helping to identify and resolve issues before they become costly business problems — or worse, cause harm to customers.


Recent advancements in large language models from OpenAI, Google, Meta, and others have spurred a rush of companies across industries to integrate LLMs into their operations. However, along with the incredible power of this new technology come significant risks and safety issues. Arthur Shield enables companies to deploy LLMs more safely by detecting and then blocking key risks, such as leakage of PII or other sensitive data; generation of toxic, offensive, or problematic language; and incorrect responses, also known as "hallucinations." The platform also detects and stops malicious user prompts — including attempts to make the model generate a response that would reflect poorly on the business, efforts to extract sensitive training data, or attempts to bypass safety controls.
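The general pattern described above — screening user prompts before they reach the model and screening model outputs before they reach the user — can be sketched in a few lines. The following is a hypothetical illustration only, not Arthur Shield's actual API; the pattern names, marker strings, and regexes are assumptions, and a real product would use trained detection models rather than keyword lists.

```python
import re

# Naive regexes standing in for real PII / sensitive-data detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Naive markers of prompt-injection attempts (illustrative only).
INJECTION_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def screen_prompt(prompt: str) -> list[str]:
    """Return risk flags found in a user prompt before it reaches the LLM."""
    lowered = prompt.lower()
    return [f"injection:{m}" for m in INJECTION_MARKERS if m in lowered]

def screen_response(response: str) -> list[str]:
    """Return risk flags found in a model response before it reaches the user."""
    return [f"pii:{name}" for name, pat in PII_PATTERNS.items()
            if pat.search(response)]

def firewall(prompt: str, call_model) -> str:
    """Gate both sides of an LLM call: block risky prompts and risky outputs."""
    if screen_prompt(prompt):
        return "[blocked: prompt flagged by firewall]"
    response = call_model(prompt)
    if screen_response(response):
        return "[blocked: response flagged by firewall]"
    return response
```

In this sketch, `call_model` is any callable wrapping the underlying LLM, so the same gate can sit in front of different model providers.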


“LLMs are one of the most disruptive technologies since the advent of the Internet. Yet, as with all new technologies, these advancements pose numerous potential risks to both companies and the public,” said Adam Wenchel, co-founder and CEO of Arthur. “Arthur has created the tools needed to deploy this technology more quickly and securely, so companies can stay ahead of their competitors without exposing their businesses or their customers to unnecessary risk.”


Arthur is currently used by industry leaders like Humana, the Department of Defense (DoD), Expel, Axios HQ, and three of the top five US banks to address critical issues faced by AI developers, such as accuracy, explainability, and fairness. Additionally, Arthur has secured co-selling agreements with major tech companies, including Google, Amazon, and Microsoft.

By leveraging Arthur's platform, companies across sectors have not only been able to protect their customers and ensure their AI complies with strict regulatory requirements, but have also saved hundreds of millions of dollars in operating expenses while achieving significant model-driven revenue growth.
