Luke Moffett, Alina Jade Barnett, Jon Donnelly, Fides Regina Schwartz, Hari Trivedi, Joseph Lo, Cynthia Rudin
PLoS ONE 20(6): e0320091. DOI: 10.1371/journal.pone.0320091. Published 2025-06-26 (eCollection). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12200715/pdf/
Multi-site validation of an interpretable model to analyze breast masses.
An external validation of IAIA-BL, a deep-learning-based, inherently interpretable breast lesion malignancy prediction model, was performed on two patient populations: 207 women ages 31 to 96 (425 mammograms) from iCAD, and 58 women (104 mammograms) from Emory University. This is the first external validation of an inherently interpretable, deep-learning-based lesion classification model. As measured by AUC, IAIA-BL and black-box baseline models had lower mass-margin classification performance on the external datasets than on the internal dataset. These losses correlated with a smaller reduction in malignancy classification performance, though the 95% confidence intervals for AUC overlapped across all sites. However, interpretability, as measured by model activation on relevant portions of the lesion, was maintained across all populations. Together, these results show that model interpretability can generalize even when performance does not.
Journal information:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online; authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage