
The Pixel 9 doesn’t solve my biggest problem with Google’s AI vision

I still haven’t found a regular use case for generative AI, even though Gemini has recently taken over all of Google’s products. With the Pixel 9 series, AI is more important than ever, but I find myself frustrated by the same fundamental flaw in generative AI, one that even the Pixel 9 can’t escape.


This issue of 9to5Google Weekender is part of the relaunched 9to5Google newsletter, which highlights the biggest Google stories with additional commentary and other insights. Sign up here to get it delivered to your inbox first!


From the beginning, and especially in light of generative AI being used to “improve” Search, my problem with the technology has always been that it is often wrong. You can get a correct answer 10 times, but in my eyes, it’s not worth it if a later answer spits out incorrect information as if it were absolute fact.

It’s a problem that generative AI, at least Google’s, hasn’t really been able to solve yet.

Misinformation is the obstacle that seems to hold back Google’s AI ambitions more than any other. You can’t think of it without thinking of “glue pizza.”

On the Pixel 9 series, Google is putting AI front and center, and after just a week of use I’ve already encountered several examples of this confident misinformation, reminding me why I barely ever use AI tools.

An example from this week involved an afternoon meeting. The restaurant where I was supposed to meet was closed at the time we had agreed on (around 3pm), so on the way there I asked Gemini whether there were any other options near that first location that were actually open. The question was specifically, “Can you show me any other restaurants that are open near (insert restaurant name)?”

Instead, Gemini spat out the place I literally just mentioned.

When I corrected the AI by asking what options were open, it simply repeated the same location. I then reiterated that the location it suggested was closed, and, guess what, it repeated the same thing again.

Meanwhile, 20 seconds of opening and searching Google Maps gave me the answer I needed with much less frustration.

The goal of generative AI in an assistant is supposed to be better results that are reasoned about, not just referenced. But Gemini continues to fail at this far too often. In this example, it’s not that Gemini didn’t have the information available. In fact, it even directly reported the location’s hours. It simply failed to use that information.

Gemini had all the information it needed to give me a useful answer at 2:45pm; it just ignored it.

I noticed the same thing with Pixel Screenshots.

One of the things that excited me about this app was the ability to take screenshots of online orders and ask the AI to find estimated arrival dates or the date I placed an order. But it’s pretty bad at this. For example, I asked it to find the date I purchased a shirt, an order I placed on August 15th and had taken a screenshot of the checkout page for. Instead, it displayed a completely unrelated Amazon return and claimed I ordered the shirt on September 9th, weeks later. Further attempts after adding more screenshots to the app were no better. At one point it said I ordered the shirt on August 19th, the day I asked the question, while the screenshot below shows the app claiming I placed the order on August 9th, a few days before I started using the phone and a date not referenced in any screenshot I had taken.

This one is a little easier to forgive, though. The Pixel Screenshots app only uses the information provided in the screenshots. It can’t tap into tracking numbers, it can’t read my email, so the information it has is limited. Ultimately, I have no problem with that. If it can’t do it, I get it. It’s not particularly easy!

I still can’t figure out where this date comes from.

The problem is that it confidently gives an answer that may be correct or may be wildly wrong.

This is why I just don’t see the point of things like Gemini Live. Google’s conversational AI assistant is undoubtedly impressive in the way it speaks and the way it reacts to what you say. But that won’t stop it from spitting out wildly incorrect information and presenting it as absolute fact. Heck, in his time with the Pixel 9 this week, our Andrew Romero ran into an issue with Gemini Live where the AI insisted he was planning a trip to Florida even though he repeatedly said that wasn’t going to happen. It’s clearly an AI hallucination, but it really reinforces the lack of trust I have in these products.

Until these issues are addressed, I think there’s a gaping hole in Google’s vision for AI in the Pixel 9 series that tiny caveats can’t possibly fill. We can argue about the practicality and whether anyone even wants these features, but the fact is that they exist. I just want to be able to have some level of confidence in them.


Top stories of the week

Pixel 9 Review

Following its announcement earlier this month, our first reviews of the Google Pixel 9 series are here. Stay tuned for a camera-focused look at the Pixel 9 Pro XL, a review of the Pixel 9 Pro Fold, and reviews of the upcoming Pixel Watch 3 and Buds Pro 2.



From the rest of 9to5

9to5Mac: AirPods Max 2 Coming Soon: Here’s What to Expect

9to5Toys: Amazon Officially Announces Its Next Prime Big Deal Days Event for October

Electrek: Tesla Semi Spotted in Europe, But Why?

Follow Ben: Twitter/X, Threads, and Instagram

