Do explainable AI (XAI) methods improve the acceptance of AI in clinical practice? An evaluation of XAI methods on Gleason grading

Impact Factor 3.4 · CAS Zone 2 (Medicine) · JCR Q1 (PATHOLOGY)
Robin Manz, Jonas Bäcker, Samantha Cramer, Philip Meyer, Dominik Müller, Anna Muzalyova, Lukas Rentschler, Christoph Wengenmayr, Ludwig Christian Hinske, Ralf Huss, Johannes Raffler, Iñaki Soto-Rey
DOI: 10.1002/2056-4538.70023
Journal: Journal of Pathology Clinical Research, volume 11, issue 2
Published: 2025-03-13 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/2056-4538.70023
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/2056-4538.70023
Citations: 0

Abstract

This work aimed to evaluate both the usefulness and user acceptance of five gradient-based explainable artificial intelligence (XAI) methods in the setting of a clinical decision support system for prostate carcinoma. In addition, we aimed to determine whether XAI helps to increase the acceptance of artificial intelligence (AI) and to recommend a particular method for this use case. The evaluation was conducted on an in-house tool that overlays the AI-generated Gleason grade and the corresponding XAI explanations on the original slide, using different visualization approaches. The study was a heuristic evaluation of the five XAI methods. The participants were 15 pathologists from the University Hospital of Augsburg with a wide range of experience in Gleason grading and AI. The evaluation consisted of a user information form, short questionnaires on each XAI method, a ranking of the methods, and a general questionnaire on the performance and usefulness of the AI. Ratings differed significantly between the methods, with Grad-CAM++ performing best. The majority of participants considered both the AI decision support and the XAI explanations helpful. In conclusion, our pilot study suggests that the evaluated XAI methods can indeed improve the usefulness and acceptance of AI. The results are a good indicator, but further studies with larger sample sizes are warranted to draw more definitive conclusions.
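The gradient-based methods compared in the study, including the top-rated Grad-CAM++, all build on the same core idea: weight each convolutional feature map by the gradient of the class score, sum the weighted maps, and apply a ReLU to keep only regions that support the predicted class. The sketch below illustrates the plain Grad-CAM weighting with made-up 2×2 feature maps and gradients; it is not the authors' implementation, and Grad-CAM++ further refines how the per-map weights are computed.

```python
# Illustrative Grad-CAM weighting on toy data (not the study's model).
# Two 2x2 "feature maps" and the gradient of the class score w.r.t. each cell.
feature_maps = [
    [[1.0, 0.0], [2.0, 3.0]],
    [[0.5, 1.5], [0.0, 1.0]],
]
gradients = [
    [[0.2, 0.2], [0.2, 0.2]],
    [[-0.1, -0.1], [-0.1, -0.1]],
]

def grad_cam(maps, grads):
    h, w = len(maps[0]), len(maps[0][0])
    # alpha_k: global-average-pooled gradient per feature map.
    alphas = [sum(sum(row) for row in g) / (h * w) for g in grads]
    # Weighted sum of the feature maps.
    cam = [[0.0] * w for _ in range(h)]
    for a, fmap in zip(alphas, maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a * fmap[i][j]
    # ReLU keeps only locations that positively support the class.
    return [[max(0.0, v) for v in row] for row in cam]

print(grad_cam(feature_maps, gradients))
```

In a clinical viewer such as the one described above, the resulting map would be upsampled to slide resolution and rendered as a heatmap overlay on the original tissue image.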


Source journal: Journal of Pathology Clinical Research (Medicine: Pathology and Forensic Medicine)

CiteScore: 7.40
Self-citation rate: 2.40%
Articles per year: 47
Review time: 20 weeks
Journal description: The Journal of Pathology: Clinical Research and The Journal of Pathology serve as translational bridges between basic biomedical science and clinical medicine, with particular emphasis on, but not restricted to, tissue-based studies. The focus of The Journal of Pathology: Clinical Research is the publication of studies that illuminate the clinical relevance of research in the broad area of the study of disease. Appropriately powered and validated studies with novel diagnostic, prognostic, and predictive significance, as well as biomarker discovery and validation, are welcomed. Studies with a predominantly mechanistic basis are more appropriate for the companion Journal of Pathology.