
Galileo Releases New Hallucination Index Revealing Growing Intensity in LLM Arms Race

The second annual Index, which ranks 22 leading language models, lists Anthropic’s Claude 3.5 Sonnet as the best-performing model across all tasks.

Galileo, a leader in developing generative AI for the enterprise, today announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.

This year’s Index added 11 models to the framework, representing the rapid growth in both open- and closed-source LLMs in just the past 8 months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.


Which LLM Performed the Best?

The Index tests open- and closed-source models using Galileo’s proprietary evaluation metric, context adherence, which is designed to check for output inaccuracies and help enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens to gauge performance across short (less than 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
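For illustration, here is a minimal Python sketch of how test prompts could be bucketed into the Index’s context-length bands by token count. The tokenizer choice (tiktoken’s cl100k_base) and the helper itself are assumptions made for the sketch, not part of Galileo’s framework.

```python
# Hypothetical sketch: bucket RAG test prompts into the Index's
# short/medium/long context bands by token count. The tokenizer
# (tiktoken's cl100k_base) is an assumption, not Galileo's choice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def context_bucket(prompt: str) -> str:
    """Assign a prompt to a context-length band based on its token count."""
    n = len(enc.encode(prompt))
    if n < 5_000:
        return "short"       # less than 5k tokens
    if n <= 25_000:
        return "medium"      # 5k to 25k tokens
    if 40_000 <= n <= 100_000:
        return "long"        # 40k to 100k tokens
    return "unbucketed"      # 25k-40k falls between the reported bands
```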

  • Best Overall Performing Model: Anthropic’s Claude 3.5 Sonnet. The closed-source model outpaced competitors across short, medium, and long context scenarios. Claude 3.5 Sonnet and Claude 3 Opus consistently posted near-perfect scores across categories, beating out last year’s winners, GPT-4o and GPT-3.5, especially in shorter context scenarios.
  • Best Performing Model on Cost: Google’s Gemini 1.5 Flash. The Google model offered the best performance for the cost, scoring strongly across all tasks.
  • Best Open-Source Model: Alibaba’s Qwen2-72B-Instruct. The open-source model led the field with top scores in the short- and medium-context categories.

“In today’s rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations,” says Vikram Chatterji, CEO and Co-founder of Galileo. “As hallucinations continue to be a major hurdle, our goal wasn’t to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price.”


Key Findings and Trends:

  • Open-Source Closing the Gap: Closed-source models like Claude 3.5 Sonnet and Gemini 1.5 Flash remain the top performers thanks to proprietary training data, but open-source models such as Qwen1.5-32B-Chat and Llama-3-70b-chat are rapidly closing the gap, with improving hallucination performance and lower cost barriers than their closed-source counterparts.
  • Overall Improvements with Long Context Lengths: Current RAG LLMs, such as Claude 3.5 Sonnet, Claude 3 Opus, and Gemini 1.5 Pro 001, perform particularly well with extended context lengths without losing quality or accuracy, reflecting the progress being made in both model training and architecture.
  • Large Models Are Not Always Better: In certain cases, smaller models outperform larger ones. For example, Gemini-1.5-flash-001 outperformed larger models, suggesting that efficiency in model design can sometimes outweigh scale.
  • From National to Global Focus: LLMs from outside the U.S., such as Mistral’s Mistral-large and Alibaba’s Qwen2-72B-Instruct, are emerging players in the space and continue to grow in popularity, representing the global push to create effective language models.
  • Room for Improvement: While Google’s open-source Gemma-7b performed the worst, its closed-source Gemini 1.5 Flash model consistently landed near the top.

Context Adherence uses ChainPoll, a proprietary method created by Galileo Labs, to measure how well an AI model adheres to the information it is given, helping spot when a model fabricates information that is not in the source text.
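While ChainPoll’s exact implementation is proprietary, the publicly described idea is to poll an LLM judge several times with a chain-of-thought prompt and average the yes-votes. The sketch below assumes a hypothetical `ask_judge` callable standing in for a real LLM API call; it illustrates the polling pattern only, not Galileo’s code.

```python
# Hedged sketch of a ChainPoll-style adherence score: poll an LLM judge
# several times and average the YES votes. `ask_judge` is a hypothetical
# stand-in for a real LLM API call; this is not Galileo's implementation.
from typing import Callable

JUDGE_PROMPT = (
    "Context:\n{context}\n\nResponse:\n{response}\n\n"
    "Think step by step, then answer YES if every claim in the response "
    "is supported by the context, or NO otherwise."
)

def chainpoll_adherence(
    context: str,
    response: str,
    ask_judge: Callable[[str], str],  # e.g., a wrapper around your LLM provider
    polls: int = 5,
) -> float:
    """Return the fraction of judge runs that find the response grounded."""
    prompt = JUDGE_PROMPT.format(context=context, response=response)
    votes = sum(
        ask_judge(prompt).strip().upper().startswith("YES")
        for _ in range(polls)
    )
    return votes / polls  # 1.0 = fully adherent, 0.0 = likely hallucinated
```

Averaging several chain-of-thought judgments, rather than relying on a single pass, reduces the variance of the judge model’s verdicts.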

