
Google announces Gemma 2, a 27B-parameter version of its open model, launching in June

On Tuesday, at its annual Google I/O 2024 developer conference, Google announced a number of new additions to Gemma, its family of open (but not open source) models comparable to Meta’s Llama and Mistral’s open models.

The release making headlines here is Gemma 2, the next generation of Google’s open-weights Gemma models, which will launch with a 27 billion parameter model in June.

Already available is PaliGemma, a pre-trained Gemma variant that Google describes as “the first vision language model in the Gemma family,” aimed at image captioning, image labeling, and visual question answering use cases.

Until now, the standard Gemma models, launched earlier this year, have only been available in 2 billion parameter and 7 billion parameter versions, making this new 27 billion parameter model a significant step up.

In a briefing ahead of Tuesday’s announcement, Josh Woodward, vice president of Google Labs, noted that the Gemma models have been downloaded “millions of times” across the various services where they are available. He pointed out that Google optimized the 27 billion parameter model to run on Nvidia’s next-generation GPUs, a single Google Cloud TPU host, and the Vertex AI managed service.

Size doesn’t matter if the model isn’t good, of course. Google hasn’t shared much data about Gemma 2 yet, so we’ll have to see how it performs once developers get their hands on it. “We’re already seeing some great quality. It’s outperforming models twice its size,” Woodward said.



News Source : techcrunch.com
