Human centred explainable AI decision-making in healthcare

Catharina M. van Leersum, Clara Maathuis
{"title":"Human centred explainable AI decision-making in healthcare","authors":"Catharina M. van Leersum ,&nbsp;Clara Maathuis","doi":"10.1016/j.jrt.2025.100108","DOIUrl":null,"url":null,"abstract":"<div><div>Human-centred AI (HCAI<span><span><sup>1</sup></span></span>) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. Further focusing on <em>explainable AI</em> (XAI<span><span><sup>2</sup></span></span>) allows to gather insight in the data, reasoning, and decisions made by the AI systems facilitating human understanding, trust, and contributing to identifying issues like errors and bias. While current XAI approaches mainly have a technical focus, to be able to understand the context and human dynamics, a transdisciplinary perspective and a socio-technical approach is necessary. This fact is critical in the healthcare domain as various risks could imply serious consequences on both the safety of human life and medical devices.</div><div>A reflective ethical and socio-technical perspective, where technical advancements and human factors co-evolve, is called <em>human-centred explainable AI</em> (HCXAI<span><span><sup>3</sup></span></span>). This perspective sets humans at the centre of AI design with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, limited knowledge exists on applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs, thus HCXAI could be a solution to focus on humane ethical decision-making instead of pure technical choices.</div><div>To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed actionable framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered. The first one is on AI-based interpretation of MRI scans and the second one on the application of smart flooring.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"21 ","pages":"Article 100108"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of responsible technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666659625000046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans. A further focus on explainable AI (XAI) makes it possible to gain insight into the data, reasoning, and decisions of AI systems, facilitating human understanding and trust and helping to identify issues such as errors and bias. While current XAI approaches have a mainly technical focus, understanding the context and human dynamics requires a transdisciplinary perspective and a socio-technical approach. This is critical in the healthcare domain, where various risks can have serious consequences for both the safety of human life and medical devices.
A reflective ethical and socio-technical perspective, in which technical advancements and human factors co-evolve, is called human-centred explainable AI (HCXAI). This perspective places humans at the centre of AI design, with a holistic understanding of values, interpersonal dynamics, and the socially situated nature of AI systems. In the healthcare domain, to the best of our knowledge, little is known about applying HCXAI, the ethical risks are unknown, and it is unclear which explainability elements are needed in decision-making to closely mimic human decision-making. Moreover, different stakeholders have different explanation needs; HCXAI could therefore offer a way to focus on humane, ethical decision-making rather than purely technical choices.
To tackle this knowledge gap, this article aims to design an actionable HCXAI ethical framework by adopting a transdisciplinary approach that merges academic and practitioner knowledge and expertise from the AI, XAI, HCXAI, design science, and healthcare domains. To demonstrate the applicability of the proposed framework in real scenarios and settings while reflecting on human decision-making, two use cases are considered: the first concerns AI-based interpretation of MRI scans and the second the application of smart flooring.
Source journal
Journal of responsible technology (Information Systems, Artificial Intelligence, Human-Computer Interaction)
CiteScore: 3.60
Self-citation rate: 0.00%
Articles published: 0
Review time: 168 days