Shier Nee Saw, Yet Yen Yan, Kwan Hoong Ng
Eur J Radiol. 2024 Dec 6;183:111884. doi: 10.1016/j.ejrad.2024.111884. Online ahead of print.
The inherent "black box" nature of AI algorithms presents a substantial barrier to the widespread adoption of the technology in clinical settings, leading to a lack of trust among users. This review begins by examining the foundational stages involved in the interpretation of medical images by radiologists and clinicians, encompassing both type 1 (fast thinking: the brain's capacity for intuitive judgment) and type 2 (slow thinking: the deliberate, analytical, and laborious approach to decision-making) decision-making processes. Explainable AI (XAI) in medicine refers to AI systems designed to provide transparent, interpretable, and understandable reasoning behind their predictions or decisions. The discussion then delves into current XAI approaches, exploring both inherent and post-hoc explainability for medical imaging applications and highlighting the milestones achieved. Additionally, the paper showcases some commercial medical AI systems that offer explanations through features such as heatmaps. Opportunities, challenges, and potential avenues for advancing the field are also addressed. In conclusion, the review observes that state-of-the-art XAI methods are not yet mature enough for clinical implementation, as the explanations they provide remain challenging for medical experts to comprehend. A deeper understanding of the cognitive mechanisms medical professionals employ when interpreting images is important for developing more interpretable XAI methods.
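To make concrete the kind of post-hoc heatmap explanation the abstract mentions, the sketch below computes an occlusion-sensitivity map, one simple member of that family of methods. It is illustrative only and not taken from the paper: model_score is a hypothetical stand-in classifier, and the image size, patch size, and "lesion" region are arbitrary assumptions.

```python
# Minimal occlusion-sensitivity sketch (illustrative; not from the paper).
# Idea: mask each region of the image in turn and record how much the
# model's score drops; large drops mark regions the model relied on.
import numpy as np

def model_score(image: np.ndarray) -> float:
    """Hypothetical stand-in classifier: scores an image by the mean
    intensity of a fixed 'lesion' window (for demonstration only)."""
    return float(image[24:40, 24:40].mean())

def occlusion_heatmap(image, score_fn, patch=8, stride=8):
    """Slide a mean-valued patch over the image; the score drop at each
    position becomes the saliency value (higher drop = more important)."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // stride, w // stride))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // stride, j // stride] = base - score_fn(occluded)
    return heat

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[24:40, 24:40] += 1.0  # synthetic bright "lesion"
print(occlusion_heatmap(img, model_score).round(2))
```

In clinical tools, a map like this is overlaid on the original image as a color heatmap; the review's conclusion is precisely that such overlays, as currently produced, remain hard for medical experts to interpret.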