
Google's AI Overview Search Errors Cause Furor Online

Last week, Google unveiled its biggest search change in years, showcasing new artificial intelligence capabilities that answer people’s questions as part of the company’s bid to catch up with rivals Microsoft and OpenAI.

The new technology has since generated a litany of untruths and errors – including recommending glue in a pizza recipe and eating rocks for nutrients – giving Google a black eye and causing an online outcry.

Incorrect answers in the feature, called AI Overview, undermined trust in a search engine that more than two billion people turn to for authoritative information. And while other AI chatbots lie and act weird, the backlash demonstrated that Google is under more pressure to safely integrate AI into its search engine.

The launch also continues a trend of Google experiencing issues with its latest AI features immediately after they are rolled out. In February 2023, when Google announced Bard, a chatbot to counter ChatGPT, it shared incorrect information about space. The company’s market value subsequently fell by $100 billion.

This past February, the company launched Bard’s successor, Gemini, a chatbot capable of generating images and acting as a voice-controlled digital assistant. Users quickly realized that the system refused to generate images of white people in most cases and drew inaccurate depictions of historical figures.

With each incident, tech industry insiders criticized the company for dropping the ball. But in interviews, financial analysts said Google needed to move quickly to keep pace with its competitors, even if it meant growing pains.

Google “doesn’t have a choice right now,” Thomas Monteiro, an analyst at Investing.com, said in an interview. “Companies need to act very quickly, even if it means skipping a few steps along the way. The user experience will simply have to catch up.”

Lara Levin, a Google spokesperson, said in a statement that the vast majority of AI Overview queries resulted in “high-quality information, with links for further web searches.” The AI-generated result from the tool typically appears at the top of a results page.

“Most of the examples we saw were uncommon queries, and we also saw examples that were falsified or that we were unable to reproduce,” she added. The company will use “isolated examples” of problematic responses to refine its system.

Since OpenAI launched its ChatGPT chatbot in late 2022 and became an overnight sensation, Google has been under pressure to integrate AI into its popular apps. But it is difficult to tame large language models, which learn from huge amounts of data mined from the open web – including falsehoods and satire – rather than being programmed like traditional software.

(The New York Times sued OpenAI and its partner Microsoft in December, alleging copyright infringement over news content related to AI systems.)

Google announced AI Overview with much fanfare at its annual developer conference, I/O, last week. For the first time, the company had connected Gemini, its latest large language model, to its most important product, its search engine.

AI Overview combines statements generated by its language models with live links and snippets from around the web. It can cite its sources, but does not know when a source is incorrect.

The system was designed to answer more complex and specific questions than traditional search. The result, the company said, was that the public could benefit from everything Gemini could do, eliminating some of the work of searching for information.

But things quickly went wrong and users posted screenshots of problematic examples on social media platforms like X.

AI Overview told some users to mix non-toxic glue into their pizza sauce to keep the cheese from slipping, a fake recipe it appeared to borrow from an 11-year-old Reddit post meant to be a joke. It told other users to eat at least one rock a day to get vitamins and minerals – advice that came from a satirical article in The Onion.

As the company’s cash cow, Google Search is “the only property Google needs to keep relevant/trustworthy/useful,” Gergely Orosz, a software engineer who writes the tech newsletter The Pragmatic Engineer, wrote on X, adding that complaints about AI overviews turning Google Search into garbage were all over his timeline.

People also shared examples of Google telling users in bold letters to clean their washing machines with “bleach and white vinegar,” a combination that can create harmful chlorine gas when mixed. In smaller type, it instructed users to clean with one and then the other.

Social media users competed to see who could share Google’s most outlandish answers. In some cases, they falsified the results. One manipulated screenshot appeared to show Google claiming that a good remedy for depression was jumping off the Golden Gate Bridge, citing a Reddit user. Ms. Levin, the Google spokeswoman, said the company’s systems had never returned that result.

AI Overview, however, struggled with presidential history, claiming that 17 presidents were white and that Barack Obama was the first Muslim president, according to screenshots posted on X.

It also said that Andrew Jackson had graduated from college in 2005.

Kevin Roose contributed reporting.
