Designing for human-centered AI—Lessons learned from a case study in the clinical domain

IF 5.1 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Cybernetics)
Tove Helldin, Christian Norrie
Journal: International Journal of Human-Computer Studies, Volume 205, Article 103623
DOI: 10.1016/j.ijhcs.2025.103623
Published: 2025-09-12 (Journal Article)
Citations: 0

Abstract

AI tools for supporting, or even fully automating, human decision-making have been proposed in a variety of domains, promising faster and higher-quality decisions. However, for high-stakes and critical decisions, humans are still required in the decision-making process. Despite this need for human involvement, research centers mainly on the technical issues of AI, i.e. how to develop better-performing machine learning (ML) models, setting aside the question of how to design, develop, and evaluate AI tools that are to be used in a human-AI context. This focus has led to a lack of experience and guidance in designing and developing AI tools that support their users in a decision-making context, keeping the human in the loop.
In this paper, we outline our work on designing, developing, and evaluating a transparent AI-based tool to be used by non-AI experts, namely healthcare professionals. The work had two parallel tracks. One focused on testing and implementing a suitable ML technique for sepsis diagnostics based on real patient data, and on applying explainable AI (XAI) techniques to the results so that healthcare professionals can better understand and trust them. The other track comprised an iterative design process for developing a user-centered, transparent, and trustworthy sepsis diagnostic tool, evaluating whether the generated XAI explanations were fit for purpose. We present the process used to intertwine these tracks in a common multidisciplinary development process, providing guidance on how to conduct a human-centered AI (HCAI) project. We discuss lessons learned and outline future work on HCAI tools to be used by non-AI experts.
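The abstract's first track pairs an ML classifier with XAI techniques so that clinicians can inspect why a prediction was made. A minimal sketch of that pattern, using permutation importance from scikit-learn on entirely synthetic patient features (the paper's actual model, features, and XAI method are not specified here, so everything below is an illustrative assumption):

```python
# Illustrative sketch only: train a classifier on synthetic tabular
# "patient" features and explain it with permutation importance, a simple
# model-agnostic XAI technique. Feature names and data are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "temperature", "resp_rate", "wbc_count", "lactate"]

# Synthetic cohort: the sepsis label is driven mostly by lactate and heart rate.
n = 1000
X = rng.normal(size=(n, len(feature_names)))
y = (0.9 * X[:, 4] + 0.6 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much held-out accuracy drops when one feature
# is shuffled -- an explanation a non-AI expert can read as a ranked list.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Ranked, per-feature scores like these are one common starting point for the kind of transparency the paper evaluates; whether such explanations are actually "fit for purpose" for clinicians is precisely the question its design track studies.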
Source journal: International Journal of Human-Computer Studies (Engineering & Technology - Computer Science: Cybernetics)
CiteScore
11.50
Self-citation rate
5.60%
Articles per year
108
Review time
3 months
Journal description: The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities. Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...