Chatbots are far too easily tricked into giving false medical advice.
At the risk of sounding like a broken record, your Back Page scribbler never ceases to be amazed by the blasé attitude of regulators to the threats to public health posed by the unfettered development of AI technologies.
Despite the Everest-sized mountain of evidence pointing to the harms these chatbots are wreaking, the response by our lily-livered officialdom amounts to little more than a Gallic shoulder shrug as they watch the digital horses bolting headlong into the cybersphere.
The latest study to trigger alarm bells comes to us from boffins at the University of South Australia and Flinders University, who have recently published a quite chilling analysis in the Annals of Internal Medicine detailing the dangers of letting AI roam freely in the public health domain.
The South Australian researchers, along with experts from Harvard Medical School, University College London, and the Warsaw University of Technology, combined their skillsets to highlight just how easily machine learning systems can be programmed to deliver false medical and health information.
Using instructions available only to developers, the team targeted the five most advanced AI systems developed by OpenAI, Google, Anthropic, Meta and X Corp to determine whether they could be programmed to operate as health disinformation chatbots.
They then programmed each AI system – designed to operate as chatbots when embedded in web pages – to produce incorrect responses to health queries and include fabricated references from highly reputable sources to sound more authoritative and credible.
The chatbots were then asked a series of health-related questions. The results were disconcerting.
“In total, 88% of all responses were false,” Uni of SA researcher Dr Natansh Modi said in a media release.
“And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate.”
The disinformation spewed out by the chatbots included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility, Dr Modi said.
Out of the five chatbots that were evaluated, four generated disinformation in 100% of their responses, while the fifth generated disinformation in 40% of its responses.
According to the media release, Dr Modi and his team also investigated the OpenAI GPT Store, a platform that allows users to easily create and share customised ChatGPT apps, to assess the ease with which the public could create disinformation tools.
“We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation,” he said.
Dr Modi said the study was the first to systematically demonstrate that leading AI systems could be converted into disinformation chatbots, not only with developers’ tools but also with tools available to the general public.
“If these systems can be manipulated to covertly produce false or misleading advice, then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before.
“This is not a future risk. It is already possible, and it is already happening.”
And if you think that is perhaps overstating the potential dangers of malicious players operating in the health space, just remember the one golden rule of the internet: if it can be done, it will be done.
Which is more than can be said for the efforts of our regulators and other health stakeholders who are tasked with protecting the public from these bad actors.
Reaffirm your humanity by sending flesh and blood story tips to holly@medicalrepublic.com.au.