International Coalition of Radiologists Call for Ethics Guidelines on the Use of Artificial Intelligence in Radiology
The healthcare industry is increasingly adopting artificial intelligence (AI), powerful computer systems that perform tasks normally requiring human intelligence. AI and other autonomous systems help a number of medical departments and specialties, including administration and radiology, use large amounts of information to reduce errors and improve patient care.
Artificial intelligence is technology that allows computers and machines to “learn” and function intelligently. AI combines huge amounts of data with fast computers and intelligent algorithms. In radiology, AI algorithms can analyze x-rays and other types of imaging quickly and carefully, identifying subtle patterns or small discrepancies that a human might miss.
AI can provide a number of benefits to medical professionals and patients. One recent study published in The Lancet Digital Health, for example, found that AI could detect diseases using medical imaging with the same accuracy as a doctor, although it did not outperform clinicians.
The U.S. Food and Drug Administration (FDA) uses established procedures to review and approve medical devices. The FDA can also approve software, such as AI algorithms, used in medical devices. The agency has already approved more than 30 algorithms for use in healthcare. The FDA approved OsteoDetect, for example, which is an AI algorithm that identifies wrist fractures in bone images. GE Healthcare’s AI-embedded X-ray can detect a collapsed lung in an image.
To help more AI reach the patients who could benefit from it, the FDA recently published a paper that helps software manufacturers gain approval. It does not, however, cover the ethics of AI in radiology.
Ethics in Radiology AI
As helpful as AI is in improving patient care, it does carry some risks that could have real-world consequences, especially when it comes to ethics. AI can only “learn” what it is taught – computers do not have ethics unless humans build those ethics into the algorithms.
Most medical schools require students to take the Hippocratic Oath or another pledge that establishes a clear set of ethics. These oaths usually include a variation of Hippocrates’ statement, “Do no harm.” There are currently no such standards for the use of artificial intelligence, and this has some medical professionals worried.
Without a clear code of ethics, AI and autonomous systems could potentially increase errors and worsen existing disparities in healthcare. In other words, any small errors or prejudices in the data or algorithms that the AI system learns could snowball into big errors and disparities that have real world consequences for patients.
As AI becomes increasingly popular, many medical professionals wonder about its accuracy as compared with that of a human medical professional. They also wonder about the importance of the human factor as it relates to the information AI uses when it “learns.”
Furthermore, as artificial intelligence gathers an ever-increasing amount of data, many people are concerned about the moral problems associated with collecting data on a large scale. This concern, known as data ethics, is becoming more relevant as the quantity of data rapidly expands.
Now, a number of radiology societies in the United States, Canada and Europe are calling for the creation of additional guidelines governing the ethical use of AI in imaging. The groups recently issued a joint statement in the Journal of the American College of Radiology.
Do No Harm
“AI developers ultimately need to be held to the same ‘do no harm’ standard as physicians,” the international coalition warns in their joint statement. “The radiology community should start now to develop codes of ethics and practice for AI.”
The American College of Radiology, Radiological Society of North America, Society for Imaging Informatics in Medicine, American Association of Physicists in Medicine, European Society of Radiology, European Society of Medical Imaging Informatics, and Canadian Association of Radiologists published the statement. In it, the organizations voice concerns about how data ethics will translate to AI. The authors of the statement also worry that AI might pick up human biases and prejudices from the datasets it uses.
“AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases,” the statement said. “Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues.”
While the new statement is noteworthy, it will probably not slow down the use of AI in radiology. Currently, more than 120 companies are developing AI software for medical imaging.