
Google I/O was an AI evolution, not a revolution

At Google’s I/O developer conference, the company made the case to developers — and, to some extent, consumers — for why its AI bets are ahead of its competitors’. At the event, Google revealed a revamped, AI-powered version of its search engine; an AI model with an expanded context window of 2 million tokens; AI assistants across its Workspace suite of apps, like Gmail, Drive and Docs; tools to integrate its AI into developers’ apps; and even a future vision for an AI, named Project Astra, that can respond to sight, sound, voice and text combined.

While each advancement on its own was promising, the onslaught of AI news was overwhelming. Although obviously aimed at developers, these major events are also an opportunity to impress end users with the technology. But after the flood of news, even somewhat tech-savvy consumers may be wondering: wait, what is Astra again? Is that the thing that powers Gemini Live? Is Gemini Live a bit like Google Lens? How is it different from Gemini Flash? Does Google actually make AI glasses or is that vaporware? What is Gemma, what is LearnLM… what are Gems? When does Gemini arrive in your inbox and documents? How can you use these things?

If you know the answers to these questions, congratulations, you’re a TechCrunch reader. (If not, click the links to catch up.)

Image credits: Google

What was missing from the overall presentation, despite the enthusiasm of individual presenters or the cheers of Google employees in the crowd, was any sense of an impending AI revolution. If AI is ultimately to deliver a product that shapes the direction of technology as profoundly as the iPhone shaped personal computing, I/O was not where that product made its debut.

Instead, the takeaway is that we are still in the early days of AI development.

On the sidelines of the event, there was a feeling that even Googlers knew the work was unfinished. In a demonstration of how AI could compile a student’s study guide and quiz within moments of uploading a several-hundred-page document — an impressive feat — we noticed that the quiz answers were not annotated with cited sources. Asked about accuracy, one employee admitted that the AI didn’t always get things right and that a future version would point to sources so people could verify its answers. But if you have to check the facts, how reliable is an AI study guide for preparing for the test in the first place?

In the Astra demo, a camera mounted on a table and connected to a large touchscreen lets you do things like play Pictionary with the AI, show it objects, ask questions about those objects, have it tell a story and much more. But the use cases for how these capabilities would apply to everyday life were not obvious, despite the technical advances that are, in themselves, impressive.

For example, you can ask the AI to describe objects using alliteration. During the live-streamed keynote, Astra saw a set of colored pencils and responded with “cheerfully colored creative pencils.” A nice party trick.

When we challenged Astra in a private demo to guess the object in a scribbled drawing, it immediately and correctly identified the flower and house I had drawn on the touchscreen. When I drew an insect — a larger circle for the body, a smaller circle for the head, little legs on the sides of the larger circle — the AI stumbled. Is it a flower? No. Is it the sun? No. The employee guided the AI to guess something that was alive. I added two more legs for a total of eight. Is it a spider? Yes. A human would have immediately seen the bug, despite my lack of artistic ability.

No, you weren’t supposed to record. But here is a similar demo published on X.

To give you an idea of the current state of the technology, Google staff did not allow recording or photos in the Astra demo room. Astra also worked on an Android smartphone, but you couldn’t see the app or hold the phone. The demos were fun, and the technology that made them possible is certainly worth exploring, but Google missed an opportunity to show how its AI technology will affect your daily life.

How often are we really going to need to ask an AI to come up with a band name based on a picture of your dog and a stuffed tiger, for example? Do you really need AI to help you find your glasses? (These were other Astra demos from the keynote.)

Image credits: Google demo video

This isn’t the first time we’ve seen a tech event filled with demonstrations of an advanced future without real-world applications, or ones that present minor conveniences as major upgrades. Google, for example, has also introduced AR glasses in previous years. (It even had skydivers parachute into I/O wearing Google Glass, a product introduced over a decade ago that has since been abandoned.)

After watching I/O, I get the impression that Google sees AI as just another way to generate additional revenue: pay for Google One AI Premium if you want its upgraded AI products. If so, maybe Google won’t be the one to make the first big breakthrough in mainstream AI. As OpenAI CEO Sam Altman recently put it, the original idea of OpenAI was to develop technology and “create all kinds of benefits for the world.”

“Instead,” he said, “it now seems like we’re going to create AI and other people will then use it to create all sorts of amazing things that will benefit us all.”

Google seems to be in the same boat.

Still, there were times when Google’s Astra AI looked more promising. If it could correctly identify code or make suggestions on how to improve a system based on a diagram, it would be easier to see how it could become a useful workmate. (Clippy, evolved!)

Gemini in Gmail.
Image credits: Google

There were other times when the real-world practicality of AI came through as well. A better search tool for Google Photos, for example. And having Gemini’s AI in your inbox to summarize emails, draft responses or list action items could help you finally reach inbox zero, or some approximation of it, faster. But can it eliminate your unwanted-but-not-spam emails, intelligently organize messages into labels, ensure you never miss an important message and surface everything in your inbox you must act on as soon as you log in? Can it summarize the most important news from your email newsletters? Not quite. Not yet.

Additionally, some of the more complex features that were demoed, such as AI-driven workflows and receipt organization, won’t roll out to Labs until September.

Reflecting on AI’s impact on the Android ecosystem (Google’s pitch to the developers in attendance), one gets the feeling that even Google can’t yet argue that AI will help Android users switch away from the Apple ecosystem. “When is the best time to switch from iPhone to Android?” we asked Googlers of all ranks. “This fall” was the general response. In other words, at Google’s fall hardware event, which is expected to coincide with Apple’s adoption of RCS, a texting upgrade that will make Android messaging more competitive with iMessage.

Simply put, consumer adoption of AI in personal computing devices may require new hardware developments — perhaps AR glasses? A smarter connected watch? Pixel Buds powered by Gemini? — but Google isn’t ready to reveal its hardware updates, or even tease them, just yet. And, as we’ve already seen with the disappointing launches of the Ai Pin and the Rabbit, AI hardware is still struggling.

Image credits: Google

While much can be done with Google’s AI technology on Android devices today, Google accessories like the Pixel Watch and the system that powers it, Wear OS, were largely neglected at I/O beyond some minor performance improvements. Its Pixel Buds earbuds didn’t even get a shout-out. In Apple’s world, these accessories help lock users into its ecosystem and could one day connect them to an AI-powered Siri. They are essential elements of its overall strategy, not optional add-ons.

Meanwhile, it feels like we’re waiting for the other shoe to drop: that is, Apple’s WWDC. The tech giant’s Worldwide Developers Conference promises to unveil Apple’s own AI agenda, perhaps through a partnership with OpenAI… or even Google. Will it be competitive? How could it be, if its AI can’t integrate as deeply into the operating system as Gemini does on Android? The world awaits Apple’s response.

With its hardware event in the fall, Google has time to review Apple’s launches and then attempt to create an AI moment of its own — one as powerful and as immediately understandable as Steve Jobs’ introduction of the iPhone: “An iPod, a phone and an internet communicator. An iPod, a phone… are you getting it?”

People got it. But when will they get Google’s AI in the same way? Not from this I/O, at least.

We’re launching an AI newsletter! Sign up here to start receiving it in your inboxes on June 5.
