Designing for human-centered AI—Lessons learned from a case study in the clinical domain

Tove Helldin, Christian Norrie
International Journal of Human-Computer Studies, Volume 205, Article 103623
Published: 2025-09-12
DOI: 10.1016/j.ijhcs.2025.103623
AI tools for supporting, or even fully automating, human decision-making have been proposed in a variety of domains, promising faster and higher-quality decisions. However, for high-stakes and critical decisions, humans are still required in the decision-making process. Despite this need for human involvement, research centers mainly on the technical issues of AI, i.e., how to develop better-performing machine learning (ML) models, setting aside the question of how to design, develop, and evaluate AI tools that are to be used in a human-AI context. This focus has led to a lack of experience and guidance for designing and developing AI tools that support their users in a decision-making context, keeping the human in the loop.
In this paper, we outline our work on designing, developing, and evaluating a transparent AI-based tool to be used by non-AI experts, namely healthcare professionals. The work followed two parallel tracks. One focused on testing and implementing a suitable ML technique for sepsis diagnostics based on real patient data, and on applying explainable AI (XAI) techniques to the results so that healthcare professionals can better understand and trust them. The other track comprised an iterative design process for developing a user-centered, transparent, and trustworthy sepsis diagnostic tool, evaluating whether the generated XAI explanations were fit for purpose. We present the process used to intertwine these tracks within a common multidisciplinary development process, providing guidance on how to conduct a human-centered AI (HCAI) project. We discuss lessons learned and outline future work on the development of HCAI tools to be used by non-AI experts.
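To make the idea of "XAI explanations for non-AI experts" concrete, the following is a minimal, self-contained sketch (not the paper's actual pipeline): a linear, logistic-regression-style sepsis risk score over hypothetical vital-sign features, paired with a per-feature contribution breakdown of the kind an XAI technique might surface to a clinician. All feature names, weights, and baseline values are invented for illustration.

```python
import math

# Hypothetical model: feature weights, a reference ("baseline") patient,
# and a bias term. In a real system these would come from a trained model.
FEATURES = ["heart_rate", "temperature", "resp_rate", "wbc_count"]
WEIGHTS = {"heart_rate": 0.04, "temperature": 0.9, "resp_rate": 0.12, "wbc_count": 0.15}
BASELINE = {"heart_rate": 80.0, "temperature": 37.0, "resp_rate": 16.0, "wbc_count": 8.0}
BIAS = -2.0

def risk_and_explanation(patient):
    """Return a sepsis risk probability and a ranked per-feature explanation."""
    # Each feature's contribution: weight times deviation from the baseline.
    contributions = {f: WEIGHTS[f] * (patient[f] - BASELINE[f]) for f in FEATURES}
    score = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-score))  # squash to a probability
    # Rank by absolute contribution so the most influential features come first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return prob, ranked

patient = {"heart_rate": 118, "temperature": 39.2, "resp_rate": 24, "wbc_count": 14.5}
prob, ranked = risk_and_explanation(patient)
print(f"sepsis risk: {prob:.2f}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

For a linear model, these contributions sum exactly to the log-odds of the prediction, which is what makes the explanation faithful to the model; for the non-linear ML models discussed in the paper, post-hoc XAI techniques approximate a comparable decomposition.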
Journal introduction:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...