MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models.

Grace Guo, Lifu Deng, Animesh Tandon, Alex Endert, Bum Chul Kwon
{"title":"MiMICRI:面向以领域为中心的心血管图像分类模型的反事实解释。","authors":"Grace Guo, Lifu Deng, Animesh Tandon, Alex Endert, Bum Chul Kwon","doi":"10.1145/3630106.3659011","DOIUrl":null,"url":null,"abstract":"<p><p>The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions, and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations, and help experts reason about models in terms of relevant domain knowledge. However, concerns were also surfaced about the clinical plausibility of the counterfactuals generated. We conclude with a discussion on the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings on the development of domain-centered XAI methods for model interpretability in healthcare contexts.</p>","PeriodicalId":520401,"journal":{"name":"FAccT '24 : proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24) : June 3rd-6th 2024, Rio de Janeiro, Brazil. ACM Conference on Fairness, Accountability, and Transparency (2024 : Rio de Ja...","volume":"2024 ","pages":"1861-1874"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774553/pdf/","citationCount":"0","resultStr":"{\"title\":\"MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models.\",\"authors\":\"Grace Guo, Lifu Deng, Animesh Tandon, Alex Endert, Bum Chul Kwon\",\"doi\":\"10.1145/3630106.3659011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. 
MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions, and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations, and help experts reason about models in terms of relevant domain knowledge. However, concerns were also surfaced about the clinical plausibility of the counterfactuals generated. We conclude with a discussion on the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings on the development of domain-centered XAI methods for model interpretability in healthcare contexts.</p>\",\"PeriodicalId\":520401,\"journal\":{\"name\":\"FAccT '24 : proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24) : June 3rd-6th 2024, Rio de Janeiro, Brazil. ACM Conference on Fairness, Accountability, and Transparency (2024 : Rio de Ja...\",\"volume\":\"2024 \",\"pages\":\"1861-1874\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774553/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"FAccT '24 : proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24) : June 3rd-6th 2024, Rio de Janeiro, Brazil. ACM Conference on Fairness, Accountability, and Transparency (2024 : Rio de Ja...\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3630106.3659011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/6/5 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"FAccT '24 : proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24) : June 3rd-6th 2024, Rio de Janeiro, Brazil. ACM Conference on Fairness, Accountability, and Transparency (2024 : Rio de Ja...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3630106.3659011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/5 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions, and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations, and help experts reason about models in terms of relevant domain knowledge. However, concerns were also surfaced about the clinical plausibility of the counterfactuals generated. We conclude with a discussion on the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings on the development of domain-centered XAI methods for model interpretability in healthcare contexts.
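To make the segment-replacement idea concrete, the following is a minimal Python sketch of how a morphology-based counterfactual could be generated and scored. It is an illustrative assumption, not MiMICRI's actual API: the function names, the aligned per-pixel segmentation maps, and the `model` callable returning class probabilities are all hypothetical.

```python
# Hypothetical sketch of segment-replacement counterfactuals; not MiMICRI's API.
# Assumes `image`, `donor_image`, and `segmentation` are NumPy arrays with the
# same spatial shape, and `model` maps a batch of images to class probabilities.
import numpy as np

def segment_counterfactual(image, donor_image, segmentation, structure_label):
    """Copy `image`, replacing the pixels inside one labeled morphological
    structure (e.g. the left ventricle) with the donor image's pixels."""
    counterfactual = image.copy()
    mask = segmentation == structure_label       # boolean mask of the structure
    counterfactual[mask] = donor_image[mask]     # naive in-place pixel swap
    return counterfactual

def segment_influence(model, image, donor_image, segmentation, structure_label):
    """Rough influence score: change in predicted probabilities after the swap."""
    counterfactual = segment_counterfactual(image, donor_image,
                                            segmentation, structure_label)
    original_pred = model(image[np.newaxis])[0]
    swapped_pred = model(counterfactual[np.newaxis])[0]
    return original_pred - swapped_pred
```

In practice, a structure taken from a different heart would need registration or warping to fit the target anatomy before blending; the naive pixel swap above is one way such counterfactuals can end up anatomically implausible, which may relate to the clinical plausibility concerns the medical experts raised.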
