
Cats on the moon? Google’s AI tool produces misleading answers that worry experts

Ask Google if cats have been on the moon and it used to give you a ranked list of websites so you could discover the answer for yourself.

Now it offers an instant response generated by artificial intelligence – which may or may not be correct.

“Yes, astronauts met cats on the Moon, played with them and provided care,” Google’s recently revamped search engine said in response to a question from an Associated Press reporter.

It added: “For example, Neil Armstrong said, ‘One small step for a man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

None of this is true. Similar errors – some funny, others harmful – have been shared on social media since Google this month released AI Overviews, a redesign of its search page that frequently places AI-generated summaries at the top of search results.

The new feature has alarmed experts who warn it could perpetuate bias and misinformation and put people seeking help in an emergency at risk.

When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have served as president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

Mitchell said the summary backed up the claim by citing a chapter from an academic book written by historians. But the chapter did not make the false claim – it was only referring to the false theory.

“Google’s AI system is not smart enough to understand that this citation does not actually support the claim,” Mitchell said in an email to the AP. “Given how unreliable it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

Google said in a statement Friday that it was taking “swift action” to correct errors – such as the falsehood about Obama – that violate its content policies, and is using them to “develop broader improvements” that are already rolling out. But in most cases, Google says, the system works as it should, thanks to extensive testing before its public release.

“The vast majority of AI Overviews provide high-quality information, with links to explore further across the web,” Google said in a written statement. “Most of the examples we saw were uncommon queries, and we also saw examples that were faked or that we couldn’t reproduce.”


Errors made by AI language models are difficult to reproduce, in part because they are inherently random. They work by predicting which words would best answer the questions they are asked based on the data they were trained on. They tend to make things up – a widely studied problem known as hallucination.
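Roughly speaking, that randomness comes from how a model samples its next word. The short Python sketch below – with invented tokens and probabilities, not anything drawn from Google’s actual model – illustrates why the same query can yield different answers on different runs.

import random

# Toy next-token distribution for a prompt like "Cats have been on the ..." –
# the tokens and probabilities here are invented for illustration only.
NEXT_TOKEN_PROBS = {"internet": 0.45, "prowl": 0.30, "moon": 0.15, "couch": 0.10}

def sample_next_token(probs, temperature=1.0):
    # Higher temperature flattens the distribution, making
    # low-probability (possibly wrong) tokens more likely.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for token, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # fallback for floating-point edge cases

# The same "query" run five times can pick different continuations,
# which is one reason a given error can be hard to reproduce.
print([sample_next_token(NEXT_TOKEN_PROBS) for _ in range(5)])

Because each output is a draw from a probability distribution rather than a fixed lookup, an error that appears once may simply never come up again when someone tries to verify it.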

The AP tested Google’s AI feature with several questions and shared some of its answers with subject matter experts. When asked what to do in case of a snake bite, Google gave an “incredibly comprehensive” answer, said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

But when people turn to Google with an urgent question, the chance that the answer the tech company gives them contains a hard-to-notice error is a problem.

“The more stressed or hurried or rushed you are, the more likely you are to just accept the first answer that comes out,” said Emily M. Bender, professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington. “And in some cases, those can be life-threatening situations.”

That’s not Bender’s only concern – and she has been warning Google about it for several years. When Google researchers published a paper in 2021 titled “Rethinking Search” that proposed using AI language models as “domain experts” that could answer questions with authority – much as they are doing now – Bender and her colleague Chirag Shah responded with a paper explaining why that was a bad idea.

They warned that such AI systems could perpetuate the racism and sexism found in the massive amounts of written data they were trained on.

“The problem with this kind of misinformation is that we’re swimming in it,” Bender said. “So people are likely to have their prejudices confirmed.” And it’s harder to spot misinformation when it confirms your biases.

Another concern ran deeper: that ceding information-seeking to chatbots was degrading the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

Those forums and other websites count on Google sending people to them, but Google’s new AI Overviews threaten to disrupt the flow of lucrative internet traffic.

Google’s competitors have also closely followed the reaction. The search giant has been under pressure for more than a year to offer more AI features, as it competes with OpenAI, the maker of ChatGPT, and with newcomers such as Perplexity AI, which aspires to take on Google with its own AI Q&A app.

“It seems like this was rushed by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There are just a lot of unforced errors in the quality.”


