Google grilled over AI bot Med-PaLM 2 used in hospitals

Google is under pressure from a US lawmaker to explain how it trains and deploys its medical chatbot Med-PaLM 2 in hospitals.

Writing to the internet giant today, Senator Mark Warner (D-VA) also urged the web titan not to put patients at risk in a rush to commercialize the technology.

Med-PaLM 2 is based on Google’s large language model PaLM 2, and is fine-tuned on medical information. The system can generate written answers in response to medical queries, summarize documents, and retrieve data. Google introduced the model in April, and said a select group of Google Cloud customers were testing the software.
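
Google has not published Med-PaLM 2's training pipeline, but domain fine-tuning of a causal language model generally follows a familiar pattern. The sketch below illustrates that pattern with the Hugging Face transformers library; the gpt2 checkpoint and the medical_qa.jsonl file of question-answer records are stand-ins, since PaLM 2's weights and Google's actual data are not available.

```python
# Illustrative only: Google has not released Med-PaLM 2 or its training code.
# This sketches generic supervised fine-tuning of an open causal LM on
# medical Q&A pairs; "medical_qa.jsonl" is a hypothetical dataset file.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in; PaLM 2 weights are not public
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each record in the file: {"question": "...", "answer": "..."}
dataset = load_dataset("json", data_files="medical_qa.jsonl")["train"]

def tokenize(example):
    text = f"Q: {example['question']}\nA: {example['answer']}"
    out = tokenizer(text, truncation=True, max_length=512,
                    padding="max_length")
    # Causal LM objective: the model learns to predict its own input tokens.
    # (Masking loss on padding tokens is omitted here for brevity.)
    out["labels"] = out["input_ids"].copy()
    return out

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medtuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
)
trainer.train()
```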

One of those testers is VHC Health, a hospital in Virginia affiliated with the Mayo Clinic, according to Senator Warner. In a letter to Google chief Sundar Pichai, Warner said he was concerned that generative AI raises “complex new questions and risks” particularly when applied in the healthcare industry.

“While AI undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” he wrote [PDF].

“This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information.”

In his letter the senator laid out a dozen sets of questions for Google’s executives to answer. These queries included:

Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?

What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?

Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by it, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure or more clearly presented?

Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.

and finally…

In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2, as well as guidance on when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?

All rather good points that ought to be raised or highlighted.
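
The memorization concern in the first of those questions is also empirically testable. The sketch below shows one crude probe: feed a model the prefix of a record that may have appeared in its training data and check whether it completes the held-back remainder verbatim. Everything here is a placeholder, since neither Med-PaLM 2 nor its training corpus is public; gpt2 stands in for the model under test and the patient record is invented.

```python
# Crude training-data extraction probe (illustrative; the model and record
# are placeholders -- Med-PaLM 2 and its training corpus are not public).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM under test
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A record suspected of being in the training set, split into a prompt
# prefix and a held-back "secret" tail.
record = "Patient John Doe, DOB 1970-01-01, diagnosed with type 2 diabetes"
prefix, secret = record[:30], record[30:]

inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, not the echoed prompt.
completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])

# Verbatim reproduction of the held-back tail suggests memorization.
print("Leaked!" if secret.strip() in completion else "No verbatim leak.")
```

A real audit would sample many records and score exact or near-exact matches statistically, but the principle is the one Warner's question gestures at: if the model can regurgitate its training data, sensitive health records in that data are at risk.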

Large language models are prone to generating false information that sounds convincing, so one might well fear a bot confidently handing out harmful medical advice or wrongly influencing someone’s health decisions. The National Eating Disorders Association, for example, took its Tessa chatbot offline after it suggested people count calories, weigh themselves weekly, and monitor body fat – behaviors that are considered counterproductive to a healthy recovery.

A Google-DeepMind-authored research paper detailing Med-PaLM 2 admitted the model’s “answers were not as favorable as physician answers,” and that they scored poorly in terms of accuracy and relevance.

Warner wants Pichai to share more information about how the model is deployed in clinical settings, whether the mega-corp is collecting patient data from those testing its technology, and what data was used to train it.

He highlighted that Google has previously stored and analyzed patient data without patients’ explicit knowledge or consent, in deals with hospitals in the US and UK under the Project Nightingale banner.

“Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?” he asked. 

A spokesperson for Google denied that Med-PaLM 2 was a chatbot as people know them today, and said the model was being tested by customers to explore how it can be useful to the healthcare industry. 

“We believe AI has the potential to transform healthcare and medicine and are committed to exploring with safety, equity, evidence and privacy at the core,” the representative told The Register in a statement. 

“As stated in April, we’re making Med-PaLM 2 available to a select group of healthcare organizations for limited testing, to explore use cases and share feedback – a critical step in building safe and helpful technology. These customers retain control over their data. Med-PaLM 2 is not a chatbot; it is a fine-tuned version of our large language model PaLM 2, and designed to encode medical knowledge.”

The spokesperson did not confirm whether Google would be responding to Senator Warner’s questions. ®
