WhyLabs Launches LangKit to Make Large Language Models Safe and Responsible
WhyLabs Open-sources a Powerful Technology to Equip Enterprises with Critical Safety Guardrails for LLMs
WhyLabs, the leading observability platform trusted by high-performing teams to control the behavior of AI & data applications, announced LangKit – the observability and safety standard for Large Language Models (LLMs). LangKit enables detection of risks and safety issues in open-source and proprietary LLMs, including toxic language, jailbreaks, sensitive data leakage, and hallucinations.
“As more organizations incorporate LLMs into customer-facing applications, reliability and transparency will be key for successful deployments,” said Andrew Ng, Managing General Partner of AI Fund. “With LangKit, WhyLabs provides an extensible and scalable approach for solving challenges that many AI practitioners will face when deploying LLMs in production.”
“With the emergence of LLMs, the AI community faced a unique phenomenon: our ability to evaluate the performance of this new wave of AI technologies is increasingly challenged. At WhyLabs, we have been working with the industry’s most advanced AI/ML teams for the past year to build an approach for evaluating and monitoring generative models; these efforts culminated in the creation of LangKit,” said Alessya Visnjic, co-founder and CEO at WhyLabs.
With LangKit, AI practitioners extract a critical set of telemetry data from prompts and responses to describe the behavior of an LLM. The WhyLabs Platform enables users to set alert parameters for activity including malicious prompts, sensitive data, toxic responses, problematic topics, hallucinations, and jailbreak attempts. With these alerts and guardrails, application developers can prevent inappropriate prompts, undesirable LLM responses, and violations of LLM usage policies.
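As an illustration of that workflow, the sketch below profiles a single prompt/response pair with LangKit's language metrics and whylogs. It follows LangKit's documented quickstart pattern; exact module and metric names may vary by version, and the prompt/response text is invented for the example.

```python
import whylogs as why
from langkit import llm_metrics  # registers LLM-relevant metrics (toxicity, sentiment, etc.)

# Build a whylogs schema that adds LangKit's prompt/response telemetry
schema = llm_metrics.init()

# Profile one prompt/response pair; in production this would run over batches of live traffic
results = why.log(
    {
        "prompt": "How do I reset my password?",
        "response": "You can reset it from the account settings page.",
    },
    schema=schema,
)

# The resulting profile carries the extracted telemetry, which can be sent to the
# WhyLabs Platform to drive alerts and guardrails.
print(results.view().to_pandas())
```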
“In an era in which AI transitioned from buzzword to vital business necessity, effective use of LLMs is a must. As our team at Tryolabs helps enterprises put this powerful technology into practice, safety remains one of the main blockers to widespread adoption,” said Alan Descoins, CTO at Tryolabs, who specializes in helping enterprises accelerate their adoption of AI. “WhyLabs’ LangKit is a leap forward for LLMOps, providing out-of-the-box tools for measuring the quality of LLM outputs and catching issues before they affect downstream consumers, whether end users, other applications, or even other LLMs. The fact that it’s easily extensible and lets you add your own checks is also a big plus!”
“At Symbl.ai we deliver conversation intelligence as a service to builders, so observability is critical for smooth operations and an excellent customer experience. Our platform powers experiences built on both Understanding and Generative AI, for which LangKit is critical to enabling the transparency and governance required across the end-to-end AI stack,” said Surbhi Rathore, CEO of Symbl.ai. “The WhyLabs Platform provides observability tools for a wide range of AI use cases, and the addition of LLM observability capabilities reduces engineering overhead, letting us address all operational needs with one platform.”
WhyLabs LangKit provides a unified set of telemetry guardrails for safe, reliable, and observable LLM deployments, enabling organizations to:
- Validate and safeguard individual prompts & responses: detect when either a prompt or a response is not compliant with policies and take corrective action
- Evaluate LLM behavior for policy compliance: track LLM performance against a golden set of prompts to detect changes in behavior or policy violations
- Monitor user interactions inside an LLM-powered application: monitor prompts, responses, and user interactions to be alerted about degradations in overall user experience
- Compare and A/B test across different LLM and prompt versions: ensure that changes to the LLM API are not causing a degradation of the customer experience (see the sketch after this list)
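A minimal sketch of the comparison use case, assuming the same LangKit schema as in the earlier example: two batches of traffic (for example, from two prompt templates or model versions) are profiled separately so their LangKit metrics can be compared side by side. The sample data and version labels are illustrative.

```python
import pandas as pd
import whylogs as why
from langkit import llm_metrics

schema = llm_metrics.init()

# Hypothetical traffic samples for two prompt/model versions (A and B)
batch_a = pd.DataFrame({
    "prompt": ["Summarize my invoice."],
    "response": ["Your invoice total is $42, due Friday."],
})
batch_b = pd.DataFrame({
    "prompt": ["Summarize my invoice."],
    "response": ["Invoice: $42. Pay it or else."],
})

# Profile each batch with the same schema so the telemetry is directly comparable
summary_a = why.log(batch_a, schema=schema).view().to_pandas()
summary_b = why.log(batch_b, schema=schema).view().to_pandas()

# Comparing the summaries (e.g., response toxicity or sentiment distributions)
# surfaces regressions introduced by a prompt or model change before rollout.
print(summary_a)
print(summary_b)
```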
These capabilities are available in the WhyLabs AI Observability Platform alongside existing solutions for responsible model deployment, such as monitoring of embeddings, model performance, and unstructured data drift. Industry leaders like Glassdoor, Airspace, Fortune 500 enterprises, and AI-first startups rely on WhyLabs to prevent issues in production ML models and ensure a high-quality customer experience in AI-powered applications.