Google's open models: Gemma

Google has released Gemma, its family of open LLMs: lightweight yet capable models that can handle a wide variety of tasks.

What is Gemma?

Gemma, as of today, is actually a family of four LLMs: a 2B and a 7B parameter model, each available as a base (pretrained) checkpoint and an instruction-tuned checkpoint.

Traditionally, fine-tuning an LLM involves feeding it data related to a specific task (like summaries of articles, or question-answer pairs). Instruction fine-tuning takes this a step further: the model is trained on instruction-response pairs, so it learns to follow natural-language instructions rather than just continue text. That is what the instruction-tuned Gemma variants are.
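To make that concrete, here is a purely illustrative sketch (not from Google's Gemma training data) of what a single instruction fine-tuning record might look like; the field names are a common convention, not a fixed standard.

```python
# Illustrative only: the shape of one instruction fine-tuning example.
# Field names ("instruction", "input", "response") are hypothetical conventions.
instruction_example = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "Google has released Gemma, a family of lightweight open models...",
    "response": "Google announced Gemma, a set of lightweight open LLMs.",
}

# Plain (non-instruction) fine-tuning, by contrast, typically uses raw
# task-specific text, e.g. article/summary pairs with no explicit instruction.
```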

Gemma 7B is a really strong model, with performance comparable to the best models in the 7B weight class, including Mistral 7B. Gemma 2B is an interesting model for its size, but it doesn't score as high on the leaderboard as the most capable models of a similar size, such as Phi-2. We look forward to feedback from the community about real-world usage!

Using Gemma

You can use Gemma today via Google Cloud or Hugging Face, where it is hosted for free.
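If you want to run it yourself, here is a minimal sketch using the Hugging Face transformers library. It assumes you have transformers and torch installed, have accepted the Gemma license on the Hub, and are logged in with an access token; the instruction-tuned 7B checkpoint is used as an example.

```python
# Minimal sketch: load an instruction-tuned Gemma checkpoint and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # instruction-tuned 7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single GPU
    device_map="auto",
)

# The chat template wraps the prompt in the turn markers the model expects.
messages = [{"role": "user", "content": "Explain what an LLM is in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```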

You can also play with the Gemma demo here: https://huggingface.co/chat/

Where can Gemma be used?

Being openly available, I think Gemma could be a great choice for many companies that wish to use LLMs in their workflows. While there are existing alternatives to Gemma such as Mistral and Phi, having the might of Google behind it makes Gemma a strong and reliable choice.

Can Gemma write code?

Yes. Gemma is capable of writing computer code, though it might not be as powerful as Gemini Advanced.
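As a quick, hedged sketch of what that looks like, you can prompt an instruction-tuned Gemma model for code through the transformers pipeline API (the 2B checkpoint is used here just to keep the example small; the same license and login requirements apply as above).

```python
# Sketch: ask Gemma to write a small Python function.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2b-it")

prompt = "Write a Python function that returns the n-th Fibonacci number."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```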

Can Gemma generate images?

No. These are large language models and are not capable of generating images.

Can Gemma summarize text?

Yes. Gemma models can summarize text.
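Summarization is just another instruction, so the same pipeline approach as the code-generation sketch above works; this is an illustrative snippet, not official usage guidance.

```python
# Sketch: ask Gemma for a summary of a piece of text.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2b-it")

article = "..."  # the text you want summarized
prompt = f"Summarize the following text in three bullet points:\n\n{article}"
print(generator(prompt, max_new_tokens=150)[0]["generated_text"])
```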

Where can I play with Gemma models?

The simplest way is to use Google Colab notebooks. You can start using these models right away in your code; see the getting started guide.


Written by

Wiseland AI Engineering Team