Artificial Intelligence | News | Insights | AiThority

AI21 Launches Jamba: The Most Powerful and Efficient Long-Context Models for Enterprises

AI21 Labs (PRNewsfoto/AI21 Labs)

Two new, high-performance open models to offer enterprises unmatched quality and latency, alongside the largest context window.

AI21, a leader in building foundation models and AI systems for the enterprise, today announced the release of two powerful new openly available models: Jamba 1.5 Mini and Jamba 1.5 Large.

Thanks to their groundbreaking architecture, both Jamba models stand out as the fastest and most efficient in their respective size classes, even surpassing models like Llama 8B and 70B.


Building on the success of the original Jamba model, these latest improvements to the Jamba family represent a significant leap forward in long-context language models, delivering unparalleled speed, efficiency, and performance across a broad spectrum of applications.

AI21 has pioneered a novel approach to large language model development, seamlessly merging the strengths of Transformer and Mamba architectures. This hybrid approach overcomes the limitations of both, ensuring high-quality, accurate responses while maintaining exceptional efficiency, even with expansive context windows – something typically unattainable with traditional Transformer models.

The culmination of this innovative architectural strategy is Jamba 1.5 Large, a sophisticated Mixture-of-Experts (MoE) model with 398B total parameters and 94B active parameters. Representing the pinnacle of the Jamba family, this model is engineered to tackle complex reasoning tasks with unprecedented quality and efficiency.
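For readers unfamiliar with Mixture-of-Experts, the efficiency claim rests on the fact that only a fraction of the model's parameters is activated for each token. A minimal sketch of that arithmetic, using the 398B/94B figures from the announcement (the per-token cost model is a deliberate simplification):

```python
total_params = 398e9   # Jamba 1.5 Large total parameters (from the announcement)
active_params = 94e9   # parameters activated per token (from the announcement)

# In an MoE model, per-token compute scales with the active parameters,
# not the total, so the effective compute fraction per token is roughly:
active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.1%}")  # ~23.6%
```

In other words, the model carries the capacity of 398B parameters while paying, per token, compute closer to that of a dense ~94B model.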

Jamba 1.5 Mini: Enhanced Performance and Expanded Capabilities

AI21 is also introducing Jamba 1.5 Mini, a refined and enhanced version of Jamba-instruct. This model boasts expanded capabilities and superior output quality. Both models are meticulously designed for developer-friendliness and optimized for crafting Agentic AI systems, supporting features such as function calling and tool use, JSON mode, structured document objects, citation mode, and more.
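To make the developer-facing features above concrete, here is a hedged sketch of what a JSON-mode chat request might look like. The field names mirror common chat-completion APIs and are illustrative assumptions, not AI21's documented schema:

```python
import json

# Hypothetical request body for a JSON-mode chat completion.
# Field names ("messages", "response_format") follow common
# chat-completion conventions and are assumptions, not AI21's schema.
request = {
    "model": "jamba-1.5-mini",
    "messages": [
        {
            "role": "user",
            "content": "Extract the company and product from: "
                       "'AI21 released Jamba 1.5 Large today.' Reply as JSON.",
        }
    ],
    # Constrains the model to emit valid JSON only.
    "response_format": {"type": "json_object"},
}

payload = json.dumps(request)
print(payload[:50])
```

In practice, JSON mode plus function calling lets an agentic system parse model output deterministically instead of scraping free-form text.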

Jamba Redefines LLM Performance

Both Jamba models utilize an impressive true context window of 256K tokens, the largest currently available under an open license. Unlike many long-context models, Jamba models fully utilize their declared context window, as evidenced by the new RULER benchmark. This benchmark evaluates long-context models on tasks such as retrieval, multi-hop tracing, aggregation, and question answering – areas where Jamba excels – demonstrating a high effective context length with consistently superior outputs.
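To put 256K tokens in perspective, a rough back-of-the-envelope conversion (the ~0.75 words-per-token ratio is a common heuristic for English text, not an AI21 figure):

```python
context_tokens = 256_000   # Jamba's stated context window
words_per_token = 0.75     # rough heuristic for English text (assumption)
words_per_page = 500       # a typical dense page (assumption)

approx_words = context_tokens * words_per_token
approx_pages = approx_words / words_per_page
print(f"~{approx_words:,.0f} words, roughly {approx_pages:.0f} pages")
```

Under those assumptions, a single prompt can hold on the order of a few hundred pages of text, which is the scale at which RAG and document-analysis workloads become practical in one pass.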


In rigorous end-to-end latency tests against similar models – Llama 3.1 70B, Llama 3.1 405B, and Mistral Large 2 – Jamba 1.5 Large outperformed competitors, achieving the lowest latency. With large context windows, it proved twice as fast as competitive models. Similar results were observed when comparing Jamba 1.5 Mini against Llama 3.1 8B, Mistral Nemo 12B, and Mixtral 8x7B, further highlighting its efficiency advantage.


“We believe the future of AI lies in models that truly utilize extensive context windows, especially for complex, data-heavy tasks. Jamba 1.5 Mini and 1.5 Large offer the longest context windows on the market, pushing the boundaries of what’s possible with LLM-based applications,” said Or Dagan, VP of Product, Foundation Models at AI21. “Also, our breakthrough architecture allows Jamba to process vast amounts of information with lightning-fast efficiency. Jamba’s combination of optimized architecture, unprecedented speed, and the largest available context window make it the optimal foundation model for developers and enterprises building RAG and agentic workflows.”

Industry Partnerships to Help Power Enterprise AI Adoption

AI21 is proud to partner with Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Snowflake, Databricks, and NVIDIA in this major release. These collaborations ensure enterprises can seamlessly deploy and leverage the Jamba family of foundation models within secure, controlled environments tailored to their specific needs. The Jamba family of models will also be available on Hugging Face, LangChain, LlamaIndex, and Together.AI.

AI21 is also proud to collaborate with Deloitte. “AI21’s ability to deploy their models in private environments and offer hyper-customized training solutions is becoming increasingly important to our enterprise clients,” said Jim Rowan, principal and Head of AI, Deloitte Consulting LLP. “Together, we will pair AI21’s innovative approach to LLMs with our knowledge in delivering cutting-edge AI capabilities and tailored solutions to drive significant value for our clients.”

Additionally, we’re thrilled to be featured by two leading independent model benchmarking sites: Artificial Analysis and LMSYS Chatbot Arena.

“Jamba 1.5 Mini and Large from AI21 Labs offer clear advantages for inference workloads utilizing long input prompts. In independent benchmarking by Artificial Analysis, both Jamba 1.5 Mini and Large demonstrated leading performance on prompt lengths of 10,000 tokens and above compared to other models with comparable scores across our quality evaluations,” said Micah Hill-Smith, Co-founder & CEO, Artificial Analysis. “In our performance test for prompts with 10,000 tokens, Jamba 1.5 Mini achieves above 150 output tokens per second, higher than the median tested output speed for any other model we benchmark.”

“By joining forces with these industry leaders on Jamba 1.5, we’re providing a powerful, user-friendly platform that helps to democratize AI, making it accessible, scalable, and transformative for both individuals and organizations across all industries,” added Dagan.

AI21 is on a mission to create real world value by designing AI systems that are purpose built for the enterprise. With the Jamba family of models, AI21 continues to lead the industry in developing innovative solutions that realize the potential of AI. To deploy AI21’s Jamba AI models within your organization,


