Galileo Releases the First LLM Evaluation, Experimentation and Observability Platform for Building Trustworthy Production-Ready LLM Applications

Galileo LLM Studio Now Provides a Continuous Feedback Loop Across the LLM Application Development Lifecycle to Drive Constant Output and Data Improvements With the Introduction of Its Newest Module, Monitor

Galileo, a leading machine learning (ML) company for unstructured data, announced the general availability of Galileo LLM Studio, a platform for building trustworthy LLM applications and getting them into production faster. The platform already offered Prompt and Fine-Tune modules and now adds a third module, Monitor, which provides a continuous feedback loop for developers and data scientists. All three modules draw on Galileo’s Guardrail Metrics Store, where users can apply evaluation metrics created by Galileo’s research team, which improve developer productivity and provide robust hallucination detection, or build their own custom metrics.
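
Conceptually, letting users mix vendor-supplied metrics with their own custom ones is a metric-registry pattern. The short Python sketch below illustrates that pattern in a self-contained way; the names (METRIC_REGISTRY, register_metric, no_email_leak, brevity) and the toy scoring logic are hypothetical illustrations of the idea, not the Guardrail Metrics Store API.

```python
import re
from typing import Callable, Dict

# Registry mapping metric names to scoring functions with signature (model output) -> float.
METRIC_REGISTRY: Dict[str, Callable[[str], float]] = {}


def register_metric(name: str):
    """Decorator that files a scoring function in the registry under `name`."""
    def wrapper(fn: Callable[[str], float]) -> Callable[[str], float]:
        METRIC_REGISTRY[name] = fn
        return fn
    return wrapper


@register_metric("no_email_leak")
def no_email_leak(output: str) -> float:
    """Built-in-style guardrail: 1.0 when the output contains no email address."""
    return 0.0 if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", output) else 1.0


@register_metric("brevity")
def brevity(output: str) -> float:
    """Custom user-defined metric: flag outputs longer than 50 words."""
    return 1.0 if len(output.split()) <= 50 else 0.0


if __name__ == "__main__":
    sample_output = "Contact support at help@example.com for details."
    # Score one model output against every registered metric.
    print({name: fn(sample_output) for name, fn in METRIC_REGISTRY.items()})
```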

“Enterprises face multiple challenges in operationalizing generative AI, ranging from prompt engineering to managing model performance to quantifying the impact of these models. Businesses seeking to translate the foundational large language models that are taking the world by storm into enterprise-ready applications need to take all of these aspects into account,” said Hyoun Park, CEO and Principal Analyst at Amalgam Insights.

Founded by former Apple, Google and Uber AI product and engineering leaders, Galileo launched in 2022 with the first ML data intelligence platform for unstructured data, which is now used by organizations ranging from startups to the Fortune 500. The platform supports Natural Language Processing (NLP), Computer Vision (CV) and now LLMs as part of the company’s mission to unlock the value of unstructured data (text, image, speech and more), which accounts for more than 80% of the world’s data.

As organizations of all sizes and across industries begin to consider the potential applications of generative AI, it is more important than ever for them to have governance frameworks in place that minimize the risk of LLM hallucinations in a scalable and efficient manner.

“There is a strong need for an evaluation toolchain across prompting, fine-tuning and production monitoring to proactively mitigate hallucinations. Galileo’s LLM Studio offers exactly that toolchain. Highly recommend it to all LLM builders!” said Waseem Alshikh, co-founder and CTO of Writer, a leading generative AI platform company.

“We’ve spent the last year speaking with enterprises working to bring LLM-based applications to production, and three things became radically clear. First, companies of all sizes now have LLM-powered applications in production. Second, LLM output evaluation is painfully manual with no guardrails against hallucinations. Third, teams are looking for sophisticated metric-driven monitoring for their applications in production. This need for LLM evaluation, experimentation and observability was core to our latest release,” said Vikram Chatterji, co-founder and CEO of Galileo.

Galileo LLM Studio has three modules:

  • Prompt helps teams find the ‘right’ prompt fast, letting them collaboratively build, evaluate and experiment to find prompts that perform well and minimize hallucinations.
  • Fine-Tune lets users fine-tune LLMs using the right context and their own unique data. Galileo’s evaluation metrics make it easy to identify the data that is pulling model performance down.
  • Monitor powers the continuous feedback loop needed to drive constant prompt and data improvements by giving users a set of observability tools and evaluation metrics to monitor application performance in production, such as cost, latency and hallucinations (see the sketch after this list).
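
To make the three-module workflow concrete, here is a minimal, self-contained Python sketch of the evaluate-then-monitor feedback loop described above, assuming a token-overlap stand-in for a hallucination metric. Every name in it (call_llm, score_groundedness, evaluate_prompts, MonitorLog) is a hypothetical illustration of the pattern, not Galileo’s SDK or API.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer so the sketch runs offline."""
    return "Galileo LLM Studio ships three modules: Prompt, Fine-Tune and Monitor."


def score_groundedness(answer: str, context: str) -> float:
    """Toy hallucination check: fraction of answer tokens that also appear in the context."""
    answer_tokens = answer.lower().split()
    if not answer_tokens:
        return 0.0
    context_text = context.lower()
    return sum(token in context_text for token in answer_tokens) / len(answer_tokens)


def evaluate_prompts(templates: List[str], context: str,
                     metric: Callable[[str, str], float]) -> Dict[str, float]:
    """Experimentation step: score each candidate prompt template so the best one is kept."""
    return {t: metric(call_llm(t.format(context=context)), context) for t in templates}


@dataclass
class MonitorLog:
    """Production observability step: record latency, cost and metric score per request."""
    records: List[dict] = field(default_factory=list)

    def log_request(self, prompt: str, context: str,
                    metric: Callable[[str, str], float],
                    cost_per_call_usd: float = 0.002) -> str:
        start = time.perf_counter()
        answer = call_llm(prompt)
        self.records.append({
            "latency_s": round(time.perf_counter() - start, 6),
            "cost_usd": cost_per_call_usd,
            "groundedness": metric(answer, context),
        })
        return answer


if __name__ == "__main__":
    context = "Galileo LLM Studio ships three modules: Prompt, Fine-Tune and Monitor."
    candidates = ["Summarize: {context}", "List the modules named in: {context}"]
    print(evaluate_prompts(candidates, context, score_groundedness))  # pick the best prompt

    monitor = MonitorLog()
    monitor.log_request(candidates[0].format(context=context), context, score_groundedness)
    print(monitor.records)  # per-request metrics a production dashboard would aggregate
```

Running the script prints a score per candidate prompt and then a per-request record of latency, cost and groundedness, which is the kind of signal a production monitor aggregates over time.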
