Human-centered explainability evaluation in clinical decision-making: a critical review of the literature.

IF 4.6 | CAS Zone 2 (Medicine) | Q1 Computer Science, Information Systems
Jenny M Bauer, Martin Michalowski
{"title":"临床决策中以人为中心的可解释性评价:文献综述。","authors":"Jenny M Bauer, Martin Michalowski","doi":"10.1093/jamia/ocaf110","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific, clinical decision-making (CDM) within medical settings. It highlights the critical need to incorporate human-centered (HCP) evaluation approaches based on their CDM needs, processes, and goals.</p><p><strong>Materials and methods: </strong>The review was conducted in Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. An individual study appraisal was conducted using design-specific appraisal tools. MaxQDA software was used for data extraction and evidence table procedures.</p><p><strong>Results: </strong>Of the 2673 unique records retrieved, 25 records were included in the final sample. Studies were excluded if they did not meet this review's definitions of HCP evaluation (1156), healthcare use (995), explainable AI (211), and primary research (285), and if they were not available in English (1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures.</p><p><strong>Discussion: </strong>The synthesis of sampled studies suggests a potential common measure of clinical explainability with 3 indicators of interpretability, fidelity, and clinical value. There is an opportunity to extend the current model-centered evaluation approaches to incorporate human-centered metrics, supporting the transition into practice.</p><p><strong>Conclusion: </strong>Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":" ","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human-centered explainability evaluation in clinical decision-making: a critical review of the literature.\",\"authors\":\"Jenny M Bauer, Martin Michalowski\",\"doi\":\"10.1093/jamia/ocaf110\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific, clinical decision-making (CDM) within medical settings. It highlights the critical need to incorporate human-centered (HCP) evaluation approaches based on their CDM needs, processes, and goals.</p><p><strong>Materials and methods: </strong>The review was conducted in Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. An individual study appraisal was conducted using design-specific appraisal tools. MaxQDA software was used for data extraction and evidence table procedures.</p><p><strong>Results: </strong>Of the 2673 unique records retrieved, 25 records were included in the final sample. 
Studies were excluded if they did not meet this review's definitions of HCP evaluation (1156), healthcare use (995), explainable AI (211), and primary research (285), and if they were not available in English (1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures.</p><p><strong>Discussion: </strong>The synthesis of sampled studies suggests a potential common measure of clinical explainability with 3 indicators of interpretability, fidelity, and clinical value. There is an opportunity to extend the current model-centered evaluation approaches to incorporate human-centered metrics, supporting the transition into practice.</p><p><strong>Conclusion: </strong>Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.</p>\",\"PeriodicalId\":50016,\"journal\":{\"name\":\"Journal of the American Medical Informatics Association\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American Medical Informatics Association\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1093/jamia/ocaf110\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Medical Informatics Association","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1093/jamia/ocaf110","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific clinical decision-making (CDM) within medical settings. It highlights the critical need to incorporate human-centered evaluation approaches grounded in HCPs' CDM needs, processes, and goals.

Materials and methods: The review was conducted in Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. An individual study appraisal was conducted using design-specific appraisal tools. MaxQDA software was used for data extraction and evidence table procedures.

Results: Of the 2673 unique records retrieved, 25 were included in the final sample. Studies were excluded if they did not meet this review's definitions of HCP evaluation (n = 1156), healthcare use (n = 995), explainable AI (n = 211), or primary research (n = 285), or if they were not available in English (n = 1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures.
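As a quick sanity check, the exclusion counts reported above account exactly for the gap between retrieved and included records. A minimal Python sketch of the screening arithmetic (counts taken from the abstract; variable and key names are ours):

```python
# PRISMA-style screening flow, using the counts reported in the abstract.
retrieved = 2673  # unique records retrieved

exclusions = {
    "did not meet HCP evaluation definition": 1156,
    "did not meet healthcare use definition": 995,
    "did not meet explainable AI definition": 211,
    "not primary research": 285,
    "not available in English": 1,
}

included = retrieved - sum(exclusions.values())
assert included == 25  # matches the 25 records in the final sample
print(f"excluded {sum(exclusions.values())}, included {included}")
```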

Discussion: The synthesis of sampled studies suggests a potential common measure of clinical explainability with three indicators: interpretability, fidelity, and clinical value. There is an opportunity to extend current model-centered evaluation approaches to incorporate human-centered metrics, supporting the transition into practice.
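To make the proposed three-indicator measure concrete, here is a hypothetical sketch of how such a composite score could be structured. The [0, 1] scales, equal weighting, and field names are illustrative assumptions, not anything specified by the review:

```python
from dataclasses import dataclass

@dataclass
class ClinicalExplainabilityScore:
    """Hypothetical composite of the three indicators named in the review.

    All three are assumed normalized to [0, 1]; the equal weighting below
    is an illustrative assumption, not a published formula.
    """
    interpretability: float  # can the HCP understand the explanation?
    fidelity: float          # does the explanation reflect the model's actual reasoning?
    clinical_value: float    # does the explanation support the CDM task at hand?

    def composite(self, weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
        w_i, w_f, w_c = weights
        return w_i * self.interpretability + w_f * self.fidelity + w_c * self.clinical_value

# Example: one explanation rated by a clinician panel (made-up numbers).
score = ClinicalExplainabilityScore(interpretability=0.8, fidelity=0.6, clinical_value=0.7)
print(round(score.composite(), 3))  # 0.7
```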

Conclusion: Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.

Source journal
Journal of the American Medical Informatics Association
Category: Medicine - Computer Science: Interdisciplinary Applications
CiteScore: 14.50
Self-citation rate: 7.80%
Articles per year: 230
Review time: 3-8 weeks
Journal description: JAMIA is AMIA's premier peer-reviewed journal for biomedical and health informatics. Covering the full spectrum of activities in the field, JAMIA includes informatics articles in the areas of clinical care, clinical research, translational science, implementation science, imaging, education, consumer health, public health, and policy. JAMIA's articles describe innovative informatics research and systems that help to advance biomedical science and to promote health. Case reports, perspectives and reviews also help readers stay connected with the most important informatics developments in implementation, policy and education.