
Arize AI Debuts Phoenix, the First Open Source Library for Evaluating Large Language Models

Arize AI, a market leader in machine learning observability, debuted deeper support for generative AI on the Arize platform and a first-of-its-kind open source observability library for evaluating large language models (LLMs).

The launch comes at a critical moment for the future of AI. Generative AI is fueling a technical renaissance, with models like GPT-4 showing sparks of artificial general intelligence and new breakthroughs and use cases emerging daily. On the other hand, most leading large language models are black boxes that have known issues around hallucination and problematic biases.


Available today, Arize Phoenix is the first open source observability library specifically built to help data scientists evaluate outputs from LLMs like OpenAI’s GPT-4, Google’s Bard, Anthropic’s Claude, and others. Leveraging Phoenix, data scientists can visualize complex LLM decision-making, monitor LLMs when they produce false or misleading results, and home in on fixes to improve outcomes.

“A huge barrier to getting LLMs and generative agents deployed into production is the lack of observability into these systems,” says Harrison Chase, Co-Founder of LangChain. “With Phoenix, Arize is offering an open source way to visualize complex LLM decision-making.”

“Phoenix is a much-appreciated advancement in model observability and production,” says Christopher Brown, CEO and Co-Founder of AI-focused consulting firm Decision Patterns and a former Computer Science lecturer at UC Berkeley. “The integration of observability utilities directly into the development process not only saves time but encourages model development and production teams to actively think about model use and ongoing improvements before releasing to production. This is a big win for management of the model lifecycle.”

“Despite calls to halt AI development, the reality is that innovation will continue to accelerate,” said Jason Lopatecki, CEO and Co-Founder of Arize AI. “Phoenix is the first software designed to help data scientists understand how GPT-4 and LLMs think, monitor their responses and fix the inevitable issues as they arise.”

Phoenix is instantiated with a simple import call in a Jupyter notebook and is built to run interactively on top of Pandas dataframes. The tool works easily with unstructured text and images, with embeddings and latent structure analysis designed as a core foundation of the toolset.
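As a rough illustration of that import-and-launch workflow, here is a minimal sketch that follows the pattern of the early Phoenix quickstart; the column names and example values are invented for illustration, and the exact schema and dataset classes should be confirmed against the Phoenix documentation.

```python
import pandas as pd
import phoenix as px  # the Arize Phoenix library

# Illustrative dataframe of LLM prompts, responses, and response embeddings.
# Real workloads would have many more rows so the embedding views are meaningful.
df = pd.DataFrame(
    {
        "prompt": ["Summarize the quarterly report.", "What is the refund policy?"],
        "response": ["Revenue grew 12% quarter over quarter.", "Refunds are issued within 30 days."],
        "response_vector": [[0.12, -0.54, 0.33], [0.08, -0.61, 0.29]],
    }
)

# Tell Phoenix which column holds the embedding vectors and which holds the raw text.
schema = px.Schema(
    embedding_feature_column_names={
        "response_embedding": px.EmbeddingColumnNames(
            vector_column_name="response_vector",
            raw_data_column_name="response",
        )
    }
)

# Wrap the dataframe and launch the interactive Phoenix app from the notebook.
dataset = px.Dataset(dataframe=df, schema=schema, name="llm_responses")
session = px.launch_app(primary=dataset)
```

Running the cell starts a local Phoenix session and surfaces a link to the app, where embeddings, clusters, and individual prompt/response pairs can be explored interactively.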

Leveraging Phoenix, data scientists can:

  • Evaluate LLM Tasks: Troubleshoot tasks such as summarization or question/answering to find problem clusters with misleading or false answers.
  • Detect Anomalies: Surface anomalous prompts and responses using LLM embeddings.
  • Find Clusters of Issues to Export for Model Improvement: Find clusters of problems using performance metrics or drift. Export clusters for fine-tuning workflows.
  • Surface Model Drift and Multivariate Drift: Use embedding drift to surface data drift for generative AI, LLMs, computer vision (CV) and tabular models.
  • Easily Compare A/B Datasets: Uncover high-impact clusters of data points missing from model training data when comparing training and production datasets (see the sketch after this list).
  • Discover How Embeddings Represent Your Data: Map structured features onto embeddings for deeper insights into how embeddings represent your data.
  • Monitor and Analyze to Pinpoint Issues: Monitor model performance and track down issues through exploratory data analysis.
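As one possible illustration of the dataset-comparison items above, the sketch below launches Phoenix with a production sample as the primary dataset and a training snapshot as the reference; again, the API and column names follow the early Phoenix quickstart and are assumptions rather than details from this announcement.

```python
import pandas as pd
import phoenix as px

# Two illustrative splits with the same columns: a training snapshot (reference)
# and a production sample (primary). Real datasets would be much larger.
train_df = pd.DataFrame(
    {
        "response": ["Refunds are issued within 30 days.", "Shipping takes 3-5 days."],
        "response_vector": [[0.10, -0.50, 0.30], [0.05, -0.60, 0.25]],
    }
)
prod_df = pd.DataFrame(
    {
        "response": ["Refunds are never available.", "Shipping takes 3-5 days."],
        "response_vector": [[0.90, 0.40, -0.20], [0.06, -0.58, 0.27]],
    }
)

schema = px.Schema(
    embedding_feature_column_names={
        "response_embedding": px.EmbeddingColumnNames(
            vector_column_name="response_vector",
            raw_data_column_name="response",
        )
    }
)

# Embedding drift and cluster comparisons in the Phoenix UI are computed between
# the primary (production) and reference (training) datasets.
session = px.launch_app(
    primary=px.Dataset(dataframe=prod_df, schema=schema, name="production"),
    reference=px.Dataset(dataframe=train_df, schema=schema, name="training"),
)
```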
