Generative AI is coming for healthcare, and not everyone’s thrilled

Generative AI, which is capable of creating and analyzing images, text, audio, video and more, is increasingly making its way into healthcare, driven by both big tech companies and startups.

Google Cloud, the cloud services and products division of Google, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient experience. Amazon’s AWS division says it’s working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping build a generative AI system for Providence, the nonprofit health network, to automatically sort messages patients send to care providers.

Leading generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI application for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analysis tools for medical documentation.

The broad enthusiasm for generative AI is reflected in the investments flowing into generative AI efforts targeting healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of healthcare investors say generative AI has significantly influenced their investment strategies.

But professionals and patients are split on whether healthcare-focused generative AI is ready for prime time.

Generative AI may not be what people want

In a recent Deloitte survey, only about half (53%) of U.S. consumers said they believed generative AI could improve healthcare – for example, by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs’ largest health system, doesn’t think this cynicism is unwarranted. Borkowski warned that the deployment of generative AI may be premature due to its “significant” limitations – and concerns about its effectiveness.

“One of the main problems with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its limited knowledge base – that is, lack of up-to-date clinical information – and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”

Several studies suggest there’s truth to those points.

In an article in JAMA Pediatrics, OpenAI’s generative AI chatbot ChatGPT, which some health organizations have tested for limited use cases, was found to make errors in diagnosing pediatric illnesses in 83% of cases. And when testing OpenAI’s GPT-4 as a diagnostic assistant, doctors at Beth Israel Deaconess Medical Center in Boston observed that the model classified the wrong diagnosis as the first response nearly two out of three times.

Today’s generative AI also struggles with the administrative medical tasks that are an integral part of clinicians’ daily workflows. On MedAlign, a benchmark for assessing how well generative AI can perform tasks such as summarizing patient health records and searching across notes, GPT-4 failed in 35% of cases.

OpenAI and many other generative AI providers warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments, or even life-threatening situations,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the Institute for AI in Medicine at the University of Duisburg-Essen, which studies applications of emerging technologies for patient care, shares Borkowski’s concerns. He believes that the only safe way to use generative AI in healthcare right now is under the careful and watchful eye of a doctor.

“The results can be completely wrong, and it’s getting harder and harder to stay aware of that,” Egger said. “Of course, generative AI can be used, for example, to pre-write discharge letters. But doctors have the responsibility to check it and make the final decision.”

Generative AI can perpetuate stereotypes

Generative AI in healthcare can also cause harm by perpetuating stereotypes.

In a 2023 study by Stanford Medicine, a team of researchers tested ChatGPT and other AI-powered generative chatbots on questions regarding kidney function, lung capacity and skin thickness. Not only were ChatGPT’s responses often wrong, the co-authors found, but the responses also reinforced several long-standing false beliefs that there are biological differences between Black and white people – untruths that are known to have led medical providers to misdiagnose health problems.

The irony is that the patients most likely to be discriminated against by generative AI for healthcare are also the ones most likely to use it.

People who don’t have health coverage – people of color, by and large, according to a KFF study – are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI’s recommendations are tainted by bias, this could exacerbate inequalities in treatment.

However, some experts say that generative AI is improving in this regard.

In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 was unable to reach this score. But, the researchers say, through prompt engineering – designing prompts to steer GPT-4 toward certain outputs – they were able to boost the model’s score by as much as 16.2 percentage points. (It should be noted that Microsoft is a major investor in OpenAI.)

Beyond chatbots

But asking a chatbot a question isn’t the only thing generative AI is useful for. Some researchers say medical imaging could greatly benefit from the power of generative AI.

In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC) in a study published in Nature. The system is designed to determine when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC did better than specialists while reducing clinical workflows by 66%, according to the co-authors.

In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed that Panda was highly accurate at classifying these lesions, which are often detected too late for surgical intervention.

Indeed, Arun Thirunavukarasu, a clinical researcher at the University of Oxford, said there was “nothing unique” about generative AI that precludes its deployment in healthcare settings.

“More mundane applications of generative AI technology are feasible in the short to medium term, and include text correction, automatic documentation of notes and letters, and enhanced search capabilities to optimize electronic patient records,” he said. “There’s no reason why generative AI technology – if effective – couldn’t be deployed in these sorts of roles immediately.”

“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful – and trusted – as a comprehensive healthcare support tool.

“Significant privacy and security concerns surround the use of generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose serious risks to patient confidentiality and trust in the healthcare system. Additionally, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, and questions regarding liability, data protection and the practice of medicine by non-human entities have yet to be resolved.”

Even Thirunavukarasu, as optimistic as he is about generative AI in healthcare, says there needs to be “rigorous science” behind patient-facing tools.

“Especially without direct clinician oversight, there should be pragmatic randomized controlled trials demonstrating clinical benefit to justify the deployment of patient-facing generative AI,” he said. “Good governance going forward is essential to detect any unforeseen harms following large-scale deployment.”

Recently, the World Health Organization released guidelines that advocate for this type of scientific and human oversight of generative AI in healthcare, as well as the introduction of auditing, transparency and impact assessments of this AI by independent third parties. The aim, the WHO explains in its guidelines, is to encourage the participation of a diverse cohort of people in the development of generative AI for healthcare and to provide opportunities to voice concerns and offer input throughout the process.

“Until concerns are properly addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI could be … potentially dangerous for patients and the healthcare industry as a whole.”
