Beyond the black box: lessons in explainability from AI in mammography
Andrea Ciardiello, Anna D’Angelo, Luigi De Angelis, Stefano Giagu, Evis Sala, Guido Gigante
Artificial Intelligence Review 59(5), 2026. DOI: 10.1007/s10462-026-11518-5
Published: 2026-03-11 (Epub: 2026-04-06)
PDF: https://link.springer.com/content/pdf/10.1007/s10462-026-11518-5.pdf
Article page: https://link.springer.com/article/10.1007/s10462-026-11518-5
Citations: 0
Abstract
With AI already in clinical use, mammography serves as a critical test-bed for the challenges and potential of medical AI. However, its progress is hampered by the ‘black box’ nature of current AI algorithms, limiting clinician trust and transparency. This review analyses the field of Explainable AI (XAI) as a solution, examining its motivations, methods, and metrics. We find the field is dominated by post-hoc saliency methods that provide plausible but not necessarily faithful explanations of AI decision-making. This focus has led to an evaluation gap, where localization accuracy is used as a proxy for explanatory quality without verifying the model’s true reasoning. Inherently interpretable models that could offer more faithful insights are rarely implemented, and a lack of human-centred studies further obscures the clinical utility of current XAI techniques. We argue that for AI in mammography to realize its full potential, the field must urgently shift focus from creating plausible explanations to developing and validating inherently interpretable systems that provide faithful, clinically meaningful insights.
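The post-hoc saliency methods the review critiques can be sketched with a minimal input-gradient example. Everything below (the toy linear model, the 4×4 "image", all variable names) is our illustrative assumption, not material from the paper; it only shows the general shape of the technique, and the comment notes why such a readout can look plausible without being faithful.

```python
import numpy as np

# Minimal input-gradient saliency sketch on a toy linear "classifier"
# scoring a flattened 4x4 "image". A real mammography model would be a
# deep network; this stand-in just illustrates the post-hoc recipe.
rng = np.random.default_rng(0)
w = rng.normal(size=(16,))   # stand-in for trained model weights
x = rng.normal(size=(16,))   # stand-in for an input image, flattened

score = w @ x                # model output (a single logit)

# For f(x) = w.x the gradient w.r.t. the input is w; the common
# "gradient * input" attribution is then |w * x|, reshaped to the image.
saliency = np.abs(w * x).reshape(4, 4)

# Note: a bright region here can overlap a true lesion (good localization)
# while still saying nothing about how a non-linear model actually
# combined features -- the plausible-vs-faithful gap the review describes.
print(saliency.shape)
```

Usage-wise, the same recipe applied to a deep network (gradients via autodiff instead of the closed-form `w`) yields the heatmaps typically overlaid on mammograms.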
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.