News Net Daily

Everything announced at Google I/O 2024, including Gemini AI, Project Astra, Android 15 and more

At the end of I/O, Google’s annual developer conference at Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That was essentially the gist of Google’s two-hour keynote: integrating AI into every Google app and service used by more than two billion people around the world. Here are all the major updates announced by Google during the event.


Google has announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini Nano, the company’s smallest model, which runs locally on devices. Google said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro for building AI-based apps and services, while retaining defining features like the long one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, meaning it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
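As a rough back-of-the-envelope check, the two-million-token figure implies approximate per-medium token rates. The sketch below infers those rates from the capacities quoted above; the rates are derived estimates for illustration, not figures published by Google.

```python
# Back-calculate implied token rates from the quoted capacities of the
# planned 2-million-token Gemini context window. Each quoted capacity is
# assumed to fill the window on its own; the resulting per-unit rates
# are rough inferences, not official Google numbers.

CONTEXT_WINDOW = 2_000_000  # tokens

capacities = {
    "hours of video": 2,
    "hours of audio": 22,
    "lines of code": 60_000,
    "words": 1_400_000,
}

# Implied tokens consumed per unit of each medium.
implied_rate = {medium: CONTEXT_WINDOW / amount
                for medium, amount in capacities.items()}

for medium, rate in implied_rate.items():
    print(f"~{rate:,.1f} tokens per {medium.rstrip('s')}")
```

By this estimate, the quoted word count works out to roughly 1.4 tokens per word, which is in line with the common rule of thumb that a token is a bit shorter than an average English word.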

Google introduced Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be useful in everyday life.”

In a video that Google says was shot in a single take, an Astra user walks around Google’s London office holding her phone and pointing the camera at various things – a speaker, code on a whiteboard, the view through a window – while holding a natural conversation with the app about what it sees. In one of the most impressive moments of the video, Astra correctly tells the user where she left her glasses earlier, without the user ever having mentioned the glasses.

The video ends with a twist: when the user finds and puts on the missing glasses, we learn that they have an onboard camera system and can run Project Astra themselves, seamlessly continuing the conversation – perhaps a hint that Google could be working on a competitor to Meta’s Ray-Ban smart glasses.

Google Photos was already smart when it came to finding specific images or videos, but with AI, Google is taking it to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a complex question like “show me the best photo of every national park I’ve visited” when the feature rolls out in the coming months. Google Photos will use GPS information along with its own judgment about what’s “best” to present you with options. You can also ask Google Photos to generate captions for posting photos on social media.

Google’s new AI-based media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high quality” 1080p videos that can be “longer than a minute,” Google said, and can understand cinematic concepts like a timelapse.

Imagen 3, meanwhile, is a text-to-image generator that Google says renders text better than its previous version, Imagen 2. The result is the company’s highest-quality text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts – essentially pitting it against OpenAI’s DALL-E 3.

Google is making big changes to how Search fundamentally works. Most of the updates announced today – such as the ability to ask very complex questions (“Find the best yoga or Pilates studios in Boston and show details of their introductory offers and walking time from Beacon Hill.”) and using Search to plan meals and vacations – will only be available if you sign up for Search Labs, the company’s platform that lets users try experimental features.

But one major new feature, which Google calls AI Overviews and which the company has been testing for a year now, is finally rolling out to millions of people in the United States. Google Search will now present AI-generated answers alongside regular results by default, and the company says it will bring the feature to more than 1 billion users worldwide by the end of the year.

Google is integrating Gemini directly into Android. When Android 15 arrives later this year, Gemini will know what app, image or video you’re looking at, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t mention it at all during today’s keynote.

There were many other updates as well. Google said it would add digital watermarks to AI-generated videos and text, make Gemini accessible in the Gmail and Docs sidebar, power an AI virtual teammate in Workspace, listen in on phone calls to detect in real time whether you’re being scammed, and much more.

Stay informed of all the news from Google I/O 2024 directly here!

News Source : www.engadget.com