
Google uses AI to answer your health questions. Should we trust it?

Do you have a headache or is it a sinus infection? What does a stress fracture look like? Should you be worried about the pain in your chest? If you Google these questions now, the answers could be written by artificial intelligence.

In May, Google rolled out a new feature called AI Overviews, which uses generative AI, a type of machine learning technology trained on information from the internet, to produce conversational answers to certain search questions within seconds.

In the weeks since the tool launched, users have encountered a wide range of inaccuracies and strange responses on a variety of topics. The company later appeared to roll back the feature for some searches in an attempt to minimize these errors.

When it comes to AI’s answers to health questions, experts say the stakes are particularly high. The technology could steer people toward healthier habits or necessary medical care, but it also risks providing inaccurate information. AI can sometimes fabricate facts. And if its responses are shaped by websites that aren’t grounded in science, it may offer advice that runs counter to medical guidance or poses a risk to a person’s health.

The system has already been shown to produce wrong answers apparently based on faulty sources. When asked “how many rocks should I eat,” for example, AI Overviews told some users to eat at least one rock per day for vitamins and minerals. (The advice was drawn from The Onion, the satirical site.)

“You can’t trust everything you read,” said Dr. Karandeep Singh, head of health AI at UC San Diego Health. In health, he said, the source of your information is essential.

Hema Budaraju, a senior director of product management at Google who helps lead work on AI Overviews, said health searches have “additional guardrails” but declined to describe them in detail. Searches deemed dangerous or explicit, or that indicate a person is in a vulnerable situation, such as one involving self-harm, do not trigger AI summaries, she said.

Google declined to provide a detailed list of the websites that underpin the information in AI Overviews, but said the tool works in conjunction with the Google Knowledge Graph, an existing information system that has drawn billions of facts from hundreds of sources.

The new search answers do name some of their sources; for health questions, these are often websites like the Mayo Clinic, WebMD, the World Health Organization, and the scientific research hub PubMed. But the list isn’t exhaustive: the tool can also pull information from Wikipedia, blog posts, Reddit, and e-commerce sites. And it doesn’t tell users which facts come from which sources.

With a standard search result, many users would be able to immediately distinguish between a reputable medical website and a candy company. But a single block of text combining information from multiple sources can be confusing.

“And that’s if people are even looking at the source,” said Dr. Seema Yasmin, director of the Stanford Health Communication Initiative, adding: “I don’t know if people are looking, or if we’ve really taught them to look adequately.” She said her own research into misinformation has made her pessimistic about the average user’s willingness to look beyond a quick answer.

Asked about the accuracy of an AI Overviews answer on the health benefits of chocolate, Dr. Dariush Mozaffarian, a cardiologist and professor of medicine at Tufts University, said it got the facts mostly right and reasonably summarized the research on chocolate’s health benefits. But it didn’t distinguish between the strong evidence from randomized trials and the weaker evidence from observational studies, he said, and it offered no caveats about the evidence.

It’s true that chocolate contains antioxidants, Dr. Mozaffarian said. But the claim that eating chocolate could help prevent memory loss? That has not been clearly proven and “requires many caveats,” he said. Listing such claims next to each other makes some appear more established than they actually are.

Answers may also change as AI itself evolves, even if the science behind a given answer has not changed.

A Google spokesperson said in a statement that the company strives to show disclaimers on answers where they are needed, including notes that the information should not be treated as medical advice.

It’s unclear how AI Overviews assess the strength of evidence, or whether they take into account conflicting research findings, such as those on whether coffee is good for your health. “Science is not a static set of facts,” Dr. Yasmin said. She and other experts also questioned whether the tool would draw on older scientific findings that have since been disproven, or would fail to reflect the most recent understanding of an issue.

“Being able to make a critical judgment, to distinguish between the quality of sources, is something humans do all the time, something clinicians do,” said Dr. Danielle Bitterman, a physician-scientist focused on artificial intelligence at the Dana-Farber Cancer Institute and Brigham and Women’s Hospital. “They are analyzing the evidence.”

If we want tools like AI Overviews to play that role, she said, “we need to better understand how they would navigate different sources and how they would apply a critical lens to arrive at a summary.”

These unknowns are concerning, experts said, given that the new system places the AI Overviews answer above individual links to reputable medical websites such as those of the Mayo Clinic and the Cleveland Clinic, sites that have historically risen to the top of many health search results.

A Google spokesperson said AI Overviews are designed to match or summarize the information that appears in top search results, not to replace that content. The tool, the spokesperson said, is meant to help people get a sense of the information available.

The Mayo Clinic declined to comment on the new responses. A Cleveland Clinic representative said people seeking health information should “directly seek known and trusted sources” and contact a health care provider if they have symptoms.

A representative for Scripps Health, a California-based health system cited in some AI Overview summaries, said in a statement that “citations in Google’s AI-generated answers could be useful to the extent that they establish Scripps Health as a trusted source of health information.”

However, the representative added, “we are concerned that we cannot vouch for content produced through AI in the same way that we can for our own content, which is reviewed by our healthcare professionals.”

For medical questions, it’s not just the accuracy of an answer that matters, but also how it is presented to users, experts said. In answer to the question “Am I having a heart attack?” the AI response gave a helpful summary of symptoms, said Dr. Richard Gumina, director of cardiovascular medicine at The Ohio State University Wexner Medical Center.

But, he added, he had to read through a long list of symptoms before the text advised him to call 911. Dr. Gumina also searched “Am I having a stroke?” to see whether the tool would produce a more urgent response, which it did, telling users in the first line to call 911. He said he would advise patients with symptoms of a heart attack or stroke to call for help immediately.

Experts encouraged people seeking health information to approach the new answers with caution. Essentially, they said, users should heed the fine print beneath some AI Overviews responses: “This is for informational purposes only. For medical advice or diagnosis, consult a professional. Generative AI is experimental.”

Daniel Blum contributed reporting.

News source: www.nytimes.com