RANZCR member Dr Lauren Oakden-Rayner spoke to the Sydney Morning Herald about the application of AI in radiology.
A computer-assisted needle misses its target, puncturing the spine. A diabetic patient goes rapidly downhill after a computer recommends an incorrect insulin dosage. An ultrasound fails to diagnose an obvious heart condition that is ultimately fatal. These are just a few examples of incidents reported to the United States’ Food and Drug Administration involving health technology assisted by artificial intelligence (AI), and Australian researchers say they are an “early warning sign” of what could happen if regulators, hospitals and patients don’t take safety seriously in the rapidly evolving field.
Radiology is at the forefront of the rapid adoption of AI in healthcare, especially in breast cancer screening and the analysis of chest X-rays. “A couple of years ago, almost no radiologist would say they use it, now a fair percentage would say that they use it in their daily work,” said clinical radiologist and AI safety researcher Dr Lauren Oakden-Rayner. The Royal Australian and New Zealand College of Radiologists said last year that “current regulatory mechanisms have not evolved alongside recent breakthroughs in AI technology, and may no longer be fit for purpose”.
Oakden-Rayner, a member of the college, said the technology had many potential benefits, but Australian regulators and clinicians needed to better understand the risks of fully autonomous systems before putting them into hospitals, clinics and homes. “Humans are legally and morally responsible for decision-making, and it’s taking some of that out of human hands,” she said. “There’s no reason autonomous AI systems can’t exist... but they obviously have to be tested very, very tightly.”