In a recent interview, Dr Karen DeSalvo, Google's Chief Health Officer, stated that the company's AI platform, Bard, is a "tool in the toolbox" and not a replacement for doctors.
The use cases for Artificial Intelligence in our daily lives are ever-growing, and few areas are in greater need of increased resources than medicine. Google wants to put a doctor in everyone's pocket in the not-too-distant future with technology underpinned by Bard, Google's own Large Language Model chatbot.
However, there are concerns that AI could be used to make inaccurate or biased decisions, which could have serious or life-threatening consequences for patients.
Could Doctors be Replaced by AI?
In short, no. A heated topic of discussion that never fails to come up when talking about AI is whether it will make this or that profession obsolete. For the foreseeable future at least, the consensus is clear: doctors, GPs, radiographers and other medical professionals are still very much required.
Dr Karen DeSalvo, Google's Chief Health Officer and a former health policy leader in the Obama administration, spoke to Guardian Australia on a recent visit to the country.
DeSalvo broached the topic, stating “I have to say as a doc sometimes: ‘Oh my, there’s this new stethoscope in my toolbox called a large language model, and it’s going to do a lot of amazing things.’ But it’s not going to replace doctors – I believe it’s a tool in the toolbox.”
One big question medical professionals have is whether it can say "I don't know" when faced with a question it does not have the knowledge to answer. DeSalvo continues: "I'm in the camp of, there's potential here and we should be bold as we're thinking about what the potential uses could be to help people around the world."
But it should never replace humans in the diagnosis and treatment of patients, she said, pointing to concerns about the potential for misdiagnosis: early LLMs have been prone to what are called "AI hallucinations", making up source material to fit the response required.
Over 90% Accuracy Score According to One Study
The second iteration of Google's Med-PaLM sought to fix some of its predecessor's shortcomings by exposing it to data from non-English-speaking countries and helping it overcome biases relating to ethical and social issues.
A recent Google research study published in Nature found that the Med-PaLM system generated answers on par with those from clinicians 92.9% of the time.
Answers rated as potentially leading to harmful outcomes were produced by Med-PaLM 5.8% of the time, with further evaluation ongoing. This small but significant share of harmful responses indicates the tool is still in its early stages, with DeSalvo describing it as in the "test and learn phase".
DeSalvo's overall outlook was positive, though: she explained how the tool could begin to redress the balance of power and information between the medical industry and the public. People should be aware of the risks of this new technology, but also see it as an opportunity to take their health into their own hands.