IBM Expands Granite Model Family with New Multi-Modal and Reasoning AI Built for the Enterprise

  • Granite 3.2 – small AI models offering reasoning, vision, and guardrail capabilities with a developer-friendly license

  • Updated Granite time series models that offer long-range forecasting with less than 10M parameters

IBM today debuted the next generation of its Granite large language model (LLM) family, Granite 3.2, in a continued effort to deliver small, efficient, practical enterprise AI for real-world impact.

All Granite 3.2 models are available under the permissive Apache 2.0 license on Hugging Face. Select models are available today on IBM watsonx.ai, Ollama, Replicate, and LM Studio, and expected soon in RHEL AI 1.5 – bringing advanced capabilities to businesses and the open-source community. A short loading sketch appears after the highlights below. Highlights include:

  • A new vision language model (VLM) for document understanding tasks which demonstrates performance that matches or exceeds that of significantly larger models – Llama 3.2 11B and Pixtral 12B – on the essential enterprise benchmarks DocVQA, ChartQA, AI2D and OCRBench1. In addition to robust training data, IBM used its own open-source Docling toolkit to process 85 million PDFs and generated 26 million synthetic question-answer pairs to enhance the VLM’s ability to handle complex document-heavy workflows.
  • Chain of thought capabilities for enhanced reasoning in the 3.2 2B and 8B models, with the ability to switch reasoning on or off to help optimize efficiency. With this capability, the 8B model achieves double-digit improvements from its predecessor in instruction-following benchmarks like ArenaHard and Alpaca Eval without degradation of safety or performance elsewhere2. Furthermore, with the use of novel inference scaling methods, the Granite 3.2 8B model can be calibrated to rival the performance of much larger models like Claude 3.5 Sonnet or GPT-4o on math reasoning benchmarks such as AIME2024 and MATH500.3
  • Slimmed-down size options for Granite Guardian safety models that maintain performance of previous Granite 3.1 Guardian models at 30% reduction in size. The 3.2 models also introduce a new feature called verbalized confidence, which offers more nuanced risk assessment that acknowledges ambiguity in safety monitoring.
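
As noted above, the instruct models can be pulled directly from Hugging Face (and are also available through watsonx.ai, Ollama, Replicate, and LM Studio). Below is a minimal loading sketch using the Hugging Face transformers library; the repository id and generation settings are assumptions based on IBM’s published naming pattern, so verify the exact model ids in the Granite collection on Hugging Face.

    # Minimal sketch: load a Granite 3.2 instruct model with Hugging Face transformers.
    # The repository id below is an assumption; verify it against the Granite collection.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-3.2-8b-instruct"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Ask a simple enterprise-flavored question through the model's chat template.
    messages = [{"role": "user", "content": "List three risks to check in a supplier contract."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))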

IBM’s strategy to deliver smaller, specialized AI models for enterprises continues to demonstrate efficacy in testing, with the Granite 3.1 8B model recently yielding high marks on accuracy in the Salesforce LLM Benchmark for CRM.

The Granite model family is supported by a robust ecosystem of partners, including leading software companies embedding the LLMs into their technologies.

“At CrushBank, we’ve seen first-hand how IBM’s open, efficient AI models deliver real value for enterprise AI – offering the right balance of performance, cost-effectiveness, and scalability,” said David Tan, CTO, CrushBank. “Granite 3.2 takes it further with new reasoning capabilities, and we’re excited to explore them in building new agentic solutions.”

Granite 3.2 is an important step in the evolution of IBM’s portfolio and strategy to deliver small, practical AI for enterprises. While chain of thought approaches for reasoning are powerful, they require substantial compute power that is not necessary for every task. That is why IBM has introduced the ability to turn chain of thought on or off programmatically. For simpler tasks, the model can operate without reasoning to reduce unnecessary compute overhead. Additionally, other reasoning techniques like inference scaling have shown that the Granite 3.2 8B model can match or exceed the performance of much larger models on standard math reasoning benchmarks. Evolving methods like inference scaling remain a key area of focus for IBM’s research teams.4
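
A sketch of what that programmatic switch can look like in practice is shown below. The toggle is exposed through the model’s chat template; the flag name used here (thinking) is an assumption for illustration, so treat the snippet as a pattern rather than the documented interface and consult the model card for the exact parameter.

    # Sketch: toggling Granite 3.2's extended reasoning per request.
    # Assumption: the chat template accepts a boolean flag (shown here as "thinking");
    # check the model card for the exact name and default behavior.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-3.2-8b-instruct"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    prompt = [{"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed?"}]

    for thinking in (True, False):  # reasoning on for harder tasks, off to save compute
        inputs = tokenizer.apply_chat_template(
            prompt, thinking=thinking, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=512)
        print(f"thinking={thinking}:")
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))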

Alongside Granite 3.2 instruct, vision, and guardrail models, IBM is releasing the next generation of its TinyTimeMixers (TTM) models (sub 10M parameters), with capabilities for longer-term forecasting up to two years into the future. These make for powerful tools in long-term trend analysis, including finance and economics trends, supply chain demand forecasting and seasonal inventory planning in retail.
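
For readers who want to try the updated TTM checkpoints, a minimal zero-shot sketch follows. It assumes the tsfm_public package from IBM’s open-source granite-tsfm repository, the checkpoint id ibm-granite/granite-timeseries-ttm-r2, and a forward pass that accepts past_values and returns prediction_outputs; all three are assumptions to verify against the model card.

    # Sketch: zero-shot forecasting with a Granite TinyTimeMixer (TTM) checkpoint.
    # Assumptions to verify: the tsfm_public import path, the checkpoint id, and the
    # past_values / prediction_outputs interface.
    import torch
    from tsfm_public import TinyTimeMixerForPrediction  # assumed import path

    model = TinyTimeMixerForPrediction.from_pretrained(
        "ibm-granite/granite-timeseries-ttm-r2"  # assumed checkpoint id
    )

    # One synthetic univariate series: (batch=1, context_length=512, channels=1).
    history = torch.randn(1, 512, 1)

    with torch.no_grad():
        forecast = model(past_values=history).prediction_outputs

    print(forecast.shape)  # (1, forecast_length, 1); the horizon depends on the checkpoint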

“The next era of AI is about efficiency, integration, and real-world impact – where enterprises can achieve powerful outcomes without excessive spend on compute,” said Sriram Raghavan, VP, IBM AI Research. “IBM’s latest Granite developments, focused on open solutions, demonstrate another step forward in making AI more accessible, cost-effective, and valuable for modern enterprises.”
