Artificial Intelligence | News | Insights | AiThority

Bigger Isn’t Always Better in AI: The Case for Smaller Models

By: Dan Spurling, SVP Product Management, Teradata

In the rapidly evolving world of GenAI, the belief that “bigger is better” is being turned on its head. While Large Language Models (LLMs) like OpenAI’s GPT-4, Anthropic’s Claude, and Meta’s LLaMA 2 are praised for generating broadly applicable, often insightful outputs from vast amounts of data, they frequently stumble when the precision of a specific business environment is required.


Smaller, open or partly open-source, domain-specific language models, such as those offered on Hugging Face, provide significant advantages. Often called Small Language Models (SLMs), they can be fine-tuned to handle specific types of data and outputs. SLMs deliver precise insights while allowing compute to be scaled to the task, reducing both costs and environmental impact. Crucially, these models can be deployed on your own infrastructure, at the edge, or in the cloud – a flexibility that larger, hosted LLMs simply cannot match. This approach keeps AI deployment cost-effective, minimizes IT expenses, and guards against vendor lock-in.

Smaller, more focused models can also deliver more deterministic outputs. Research from DeepMind, in the article “Training Compute-Optimal Large Language Models,” indicates that such models hallucinate less frequently than their larger counterparts. Other articles, like “Mini-Giants: ‘Small’ Language Models and Open Source Win-Win,” highlight that these “mini-giant” SLMs offer superior controllability and affordability despite having fewer parameters. Leveraging the right open-source or self-hosted SLMs is crucial for maintaining control over data and compliance costs. Companies with trusted, domain-specific data should consider building or fine-tuning SLMs tailored to their business needs. These specialized models not only enhance internal operations but also open a new revenue stream: offering differentiated model capabilities tailored to specific industries or verticals.

Simply put, given the opportunities for greater accuracy, the cost and environmental benefits, and the ability to forgo the complexity of building or fine-tuning LLMs, it is both prudent and beneficial to use the smallest language model that meets the need.


Adopting smaller, open-source SLMs can provide strategic benefits. A recent Forrester survey revealed that 46% of AI leaders plan to integrate open-source models into their AI strategies. These models offer flexibility and agility, both essential in today’s competitive landscape. Industry-specific examples include:

  • Regulatory Compliance: SLMs can efficiently flag emails or documents that may affect regulatory compliance, running in parallel on the same data platform to minimize additional cost and complexity.
  • Healthcare: SLMs can analyze doctors’ notes, allowing healthcare providers to focus more on patient care while minimizing the movement of sensitive data.
  • Retail: AI-based product recommendation solutions can be built around SLMs trained on business-owned data and combined with open-source SLMs to improve clustering and vector-similarity matching, enhancing accuracy and elevating the customer experience.
  • Customer Complaint Analysis: SLMs coupled with other industry-standard models for next-best-action can analyze complaint topics and sentiment, integrating with CRM systems to recommend actions that improve issue resolution while keeping the source data secure.
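As a minimal, hypothetical sketch of the compliance use case above, consider an SLM exposed behind a simple `classify` function that scores documents for risk. The stand-in below is a keyword heuristic, not a real model; in practice `classify` would call a fine-tuned SLM hosted on your own infrastructure, but the surrounding flagging logic would look much the same:

```python
# Hypothetical sketch: flagging documents for compliance review.
# classify() is a keyword-heuristic stand-in for a fine-tuned SLM
# that would return a compliance-risk score in [0, 1].

RISK_TERMS = {"insider", "guarantee", "off the record", "non-public"}

def classify(text: str) -> float:
    """Stand-in for an SLM risk scorer: fraction of risk terms present."""
    text_lower = text.lower()
    hits = sum(term in text_lower for term in RISK_TERMS)
    return min(1.0, hits / 2)

def flag_documents(docs: list[str], threshold: float = 0.5) -> list[int]:
    """Return indices of documents whose risk score meets the threshold."""
    return [i for i, doc in enumerate(docs) if classify(doc) >= threshold]

emails = [
    "Quarterly numbers look strong; see attached deck.",
    "This is off the record, but the deal is a guarantee.",
]
print(flag_documents(emails))  # → [1]
```

Because the scorer runs alongside existing processing on the same platform, only the flagged subset needs human review, which is where the cost savings come from.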

Today’s GenAI-based business applications typically combine LLMs, SLMs, and classical machine learning (ML) models, using whichever model delivers the best balance of performance, accuracy, and cost-efficiency for each task. In these applications, LLMs play a key role, but the smaller, more focused models offer significant advantages in precision, cost, and control. Businesses should adopt LLMs and SLMs strategically, in the appropriate roles, to enhance their AI capabilities, drive success, and explore new revenue opportunities. The future of AI is not just about going big; it is about being smart, agile, and sustainable.
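One way to picture the mixed-model approach described above is a simple cost-aware router: each model advertises the tasks it can handle, and requests go to the cheapest capable model. This is an illustrative sketch only; the model names, task labels, and cost figures are hypothetical, not real endpoints:

```python
# Illustrative sketch: routing requests across a classical ML model,
# a domain SLM, and a general LLM, preferring the cheapest model that
# supports the task. All names and costs are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    cost_per_call: float          # relative cost, illustrative only
    tasks: set[str] = field(default_factory=set)

REGISTRY = [
    Model("classical-churn-scorer", 0.01, {"churn_score"}),
    Model("domain-slm", 0.10, {"churn_score", "complaint_topic", "summarize"}),
    Model("general-llm", 1.00, {"churn_score", "complaint_topic",
                                "summarize", "open_qa"}),
]

def route(task: str) -> Model:
    """Pick the cheapest registered model that supports the task."""
    candidates = [m for m in REGISTRY if task in m.tasks]
    if not candidates:
        raise ValueError(f"no model supports task: {task}")
    return min(candidates, key=lambda m: m.cost_per_call)

print(route("churn_score").name)      # → classical-churn-scorer
print(route("complaint_topic").name)  # → domain-slm
print(route("open_qa").name)          # → general-llm
```

The design point is that the general LLM remains available as a fallback for open-ended tasks, while routine, well-scoped work lands on the cheaper, more deterministic models.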


