Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model
Yao-Kuan Wang, Zan Klanecek, Tobias Wagner, Lesley Cockmartin, Nicholas Marshall, Andrej Studen, Robert Jeraj, Hilde Bosmans
Radiol Artif Intell. 2025 Nov;7(6):e240417. doi: 10.1148/ryai.240417.
Abstract
Purpose
To evaluate whether features extracted by Mirai can be aligned with mammographic observations and contribute meaningfully to the prediction of breast cancer risk.

Materials and Methods
This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29 374 screening examinations with mammograms (10 415 female patients; mean age at examination, 60 years ± 11 [SD]) from the EMory BrEast imaging Dataset (EMBED) (2013-2020) were used to evaluate feature importance using a feature-centric explainable artificial intelligence pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time to cancer, >6 months) populations using the area under the receiver operating characteristic curve (AUC).

Results
Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population: Mirai 1-year AUC, 0.81 [95% CI: 0.78, 0.84]; CalcMirai 1-year AUC, 0.76 [95% CI: 0.73, 0.80]; MassMirai 1-year AUC, 0.74 [95% CI: 0.71, 0.78] [P < .001]). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population: Mirai 5-year AUC, 0.66 [95% CI: 0.63, 0.69]; CalcMirai 5-year AUC, 0.66 [95% CI: 0.64, 0.69] [P = .71]). However, MassMirai achieved lower performance than Mirai (5-year AUC, 0.57 [95% CI: 0.54, 0.60]; P < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcifications in risk prediction.

Conclusion
The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction.
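The abstract reports model comparisons as AUCs with 95% CIs. The paper's exact interval and significance-testing method is not stated here, so the following is only a generic sketch of how an AUC and a percentile-bootstrap 95% CI can be computed from per-examination labels and risk scores; the function names and the bootstrap approach are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive
    case receives the higher score (ties count 0.5)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for the AUC.
    This is a generic method; the paper may instead use e.g. DeLong CIs."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)       # resample examinations with replacement
        if labels[idx].min() == labels[idx].max():
            continue                      # resample lacked one class; AUC undefined
        stats.append(auc(labels[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

A hypothetical usage: `auc(y, mirai_scores)` on 1-year outcome labels would reproduce a point estimate like the reported 0.81, with `bootstrap_ci` supplying the bracketed interval.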