While debate rumbles about how generative artificial intelligence will change jobs, AI is already altering health care. AI systems are being used for everything from drug discovery to diagnostic tasks in radiology and clinical note-taking. A recent survey of 2,206 clinicians found that most are optimistic about AI’s potential to make health care more efficient and accurate, and nearly half of respondents have used AI tools for work.
Yet AI remains plagued by bugs, hallucinations, privacy concerns and other ethical quandaries, so deploying it for sensitive and consequential work carries major risks. In a review article published Sept. 9 in Nature Reviews Bioengineering, University of Washington researchers argue that a key standard for deploying medical AI is transparency — that is, using various methods to clarify how a medical AI system arrives at its diagnoses and other outputs.
UW News spoke with the paper’s three authors about what transparency means for medical AI: co-lead authors Chanwoo Kim and Soham Gadgil, both UW doctoral students in the Paul G. Allen School of Computer Science & Engineering, and senior author Su-In Lee, a professor in the Allen School.
