{"title":"在人工智能临床诊断系统中使用渐进式披露实现选择性透明度","authors":"Deepa Muralidhar , Rafik Belloum , Ashwin Ashok","doi":"10.1016/j.ijhcs.2025.103591","DOIUrl":null,"url":null,"abstract":"<div><div>Explainable AI (XAI) is critical for clinical decision support systems (AI-CDSS) in healthcare, but current approaches often neglect the usability of explanations from a human–computer interaction (HCI) perspective. We investigate progressive disclosure as a strategy for selective transparency to provide effective explanations without overwhelming users. This paper presents a user-centered design of AI-CDSS interface prototypes that incorporate interactive explanation features (e.g., keyword highlighting of medical terms and interactive causal diagrams) and empathy-oriented nudges (e.g., supportive prompts and icons). We evaluated these prototypes through interviews with medical professionals and students, followed by a user study with general users, to assess their impact on understanding, trust, and satisfaction. Our findings suggest that progressive, on-demand disclosure of explanation details may help users manage information load and better follow the AI’s reasoning process. 
While several interface features were well received, some elements such as affective cues like emojis elicited skepticism, particularly in clinical contexts, which underscores the importance of context-sensitive design choices.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"204 ","pages":"Article 103591"},"PeriodicalIF":5.1000,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Operationalizing selective transparency using progressive disclosure in artificial intelligence clinical diagnosis systems\",\"authors\":\"Deepa Muralidhar , Rafik Belloum , Ashwin Ashok\",\"doi\":\"10.1016/j.ijhcs.2025.103591\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Explainable AI (XAI) is critical for clinical decision support systems (AI-CDSS) in healthcare, but current approaches often neglect the usability of explanations from a human–computer interaction (HCI) perspective. We investigate progressive disclosure as a strategy for selective transparency to provide effective explanations without overwhelming users. This paper presents a user-centered design of AI-CDSS interface prototypes that incorporate interactive explanation features (e.g., keyword highlighting of medical terms and interactive causal diagrams) and empathy-oriented nudges (e.g., supportive prompts and icons). We evaluated these prototypes through interviews with medical professionals and students, followed by a user study with general users, to assess their impact on understanding, trust, and satisfaction. Our findings suggest that progressive, on-demand disclosure of explanation details may help users manage information load and better follow the AI’s reasoning process. 
While several interface features were well received, some elements such as affective cues like emojis elicited skepticism, particularly in clinical contexts, which underscores the importance of context-sensitive design choices.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"204 \",\"pages\":\"Article 103591\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S107158192500148X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S107158192500148X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Operationalizing selective transparency using progressive disclosure in artificial intelligence clinical diagnosis systems
Explainable AI (XAI) is critical for clinical decision support systems (AI-CDSS) in healthcare, but current approaches often neglect the usability of explanations from a human–computer interaction (HCI) perspective. We investigate progressive disclosure as a strategy for selective transparency to provide effective explanations without overwhelming users. This paper presents a user-centered design of AI-CDSS interface prototypes that incorporate interactive explanation features (e.g., keyword highlighting of medical terms and interactive causal diagrams) and empathy-oriented nudges (e.g., supportive prompts and icons). We evaluated these prototypes through interviews with medical professionals and students, followed by a user study with general users, to assess their impact on understanding, trust, and satisfaction. Our findings suggest that progressive, on-demand disclosure of explanation details may help users manage information load and better follow the AI’s reasoning process. While several interface features were well received, some elements such as affective cues like emojis elicited skepticism, particularly in clinical contexts, which underscores the importance of context-sensitive design choices.
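The "progressive, on-demand disclosure of explanation details" the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; the class and field names (`Explanation`, `ProgressiveDisclosure`, `reveal_more`) are hypothetical, and the three detail layers (summary, highlighted medical terms, causal reasoning chain) are assumed from the interface features the abstract mentions.

```python
# Hypothetical sketch of progressive disclosure for AI-CDSS explanations.
# Each user request for "more detail" reveals exactly one additional layer,
# so the full reasoning is available on demand without being shown up front.

from dataclasses import dataclass


@dataclass
class Explanation:
    """An AI diagnosis explanation split into increasing levels of detail."""
    summary: str                  # level 0: one-line conclusion
    highlighted_terms: list[str]  # level 1: key medical terms to highlight
    causal_chain: list[str]       # level 2: step-by-step reasoning


@dataclass
class ProgressiveDisclosure:
    """Tracks how many explanation layers the user has chosen to see."""
    explanation: Explanation
    level: int = 0  # start with only the summary visible

    def visible(self) -> list[str]:
        """Return every line the interface should currently display."""
        layers = [
            [self.explanation.summary],
            ["Key terms: " + ", ".join(self.explanation.highlighted_terms)],
            self.explanation.causal_chain,
        ]
        shown: list[str] = []
        for layer in layers[: self.level + 1]:
            shown.extend(layer)
        return shown

    def reveal_more(self) -> None:
        """Disclose one more layer, capped at the deepest level."""
        self.level = min(self.level + 1, 2)


if __name__ == "__main__":
    exp = Explanation(
        summary="Likely pneumonia",
        highlighted_terms=["consolidation", "fever"],
        causal_chain=["X-ray shows consolidation", "Fever supports infection"],
    )
    ui = ProgressiveDisclosure(exp)
    print(ui.visible())   # summary only
    ui.reveal_more()
    print(ui.visible())   # summary + highlighted terms
    ui.reveal_more()
    print(ui.visible())   # all three layers
```

The design choice mirrored here is that deeper layers are never pushed at the user; each is pulled by an explicit interaction, which is what lets users manage information load while still being able to follow the full reasoning chain.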
Journal introduction:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...