
Introducing Gemma: Google’s Lightweight Open Source Llama Challenger


Building on the work behind its Gemini models, Google has introduced the Gemma family of open models. Gemma comes in two sizes, 2B and 7B, and both are available in pre-trained and instruction-tuned options.
Across the benchmarks Google reports, Gemma outperforms Meta’s LLM, Llama 2. For example, Gemma’s 7-billion-parameter model beats Llama 2 in reasoning, math, and other categories, with a general accuracy of 64.3%.

Users can begin working with Gemma now through the free tier for Colab notebooks and free access on Kaggle. First-time Google Cloud users also receive $300 in credits, and researchers can apply for Google Cloud credits of up to $500,000 to accelerate their studies.
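To make the getting-started path concrete, here is a minimal Python sketch using KerasNLP, which shipped Gemma support at launch. It assumes keras-nlp (0.8+) with Keras 3 is installed, that you have accepted the Gemma terms on Kaggle, and that Kaggle credentials (KAGGLE_USERNAME and KAGGLE_KEY) are set in the environment; the preset name below reflects the naming used at release.

import keras_nlp

# Download the pre-trained 2B checkpoint from Kaggle and build the model.
# (Requires accepting the Gemma license on Kaggle first.)
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Generate up to 64 tokens from a short prompt.
print(gemma_lm.generate("What is the Gemma family of models?", max_length=64))

The same call with "gemma_instruct_2b_en" or "gemma_7b_en" loads the instruction-tuned and larger variants, respectively.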
Two variants of Gemma will be available: Gemma 2B with 2 billion parameters and Gemma 7B with 7 billion. Each size is released in pre-trained and instruction-tuned variations. In addition, Gemma ships with a new Responsible Generative AI Toolkit that provides guidance and tooling for building safer AI applications.


Among the other features are:

Toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow, through native Keras 3.0.
Ready-to-use Colab and Kaggle notebooks, plus integration with popular tools such as Hugging Face, MaxText, NVIDIA NeMo, and TensorRT-LLM, make it easy to get started (see the sketch after this list).
Pre-trained and instruction-tuned Gemma models run on your laptop, workstation, or Google Cloud, with simple deployment on Vertex AI and Google Kubernetes Engine (GKE).
Optimization across multiple AI hardware platforms, including Google Cloud TPUs and NVIDIA GPUs, delivers industry-leading performance.
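As a concrete illustration of the Hugging Face integration noted in the list above, here is a hedged sketch using the transformers library. It assumes transformers v4.38 or later (the first release with Gemma support), PyTorch, and a Hugging Face account that has accepted the Gemma license; google/gemma-2b-it is the instruction-tuned 2B checkpoint as published on the Hub.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # instruction-tuned 2B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tokenize a prompt and sample a short completion.
inputs = tokenizer("Explain what a context window is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

On a machine without a GPU, dropping the torch_dtype argument and running on CPU also works for the 2B model, just more slowly.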



The terms of use allow responsible commercial usage and distribution for all organizations, regardless of size.
Several benchmarks, such as MMLU, HellaSwag, and HumanEval, show that Gemma performs better than Llama 2. Gemma is built natively on Keras 3.0, so it can be used with TensorFlow, PyTorch, and JAX, which should help it gain widespread adoption. The release also includes pre-built notebooks for Kaggle and Colab, as well as integration with widely used tools such as Hugging Face, MaxText, NVIDIA NeMo, and TensorRT-LLM.
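Because Gemma is built natively on Keras 3.0, the same code can target any of the three frameworks by switching the Keras backend before import. A minimal sketch, assuming keras-nlp is installed alongside the chosen backend:

import os

# Select the backend before Keras is imported; "tensorflow" and "torch"
# work the same way.
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp

# The identical model-loading call now runs on the chosen backend.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_2b_en")
print(gemma_lm.generate("Hello, Gemma.", max_length=32))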


Gemma models achieve industry-leading performance when optimized for NVIDIA GPUs and Google Cloud TPUs, but they can also run on a variety of platforms, including workstations, laptops, and Google Cloud. The release follows Google’s recent introduction of Gemini 1.5, which boasts a 1 million token context window, the largest yet seen in NLP models. By comparison, the context windows of GPT-4 Turbo and Claude 2.1 are 128K and 200K tokens, respectively.

