Tobias Rieger, Hanna Schindler, Linda Onnasch, Eileen Roesler
DOI: 10.1016/j.ijhcs.2025.103505
Journal: International Journal of Human-Computer Studies, Volume 199, Article 103505
Published: 2025-03-28 (Journal Article)
Impact Factor: 5.1 · JCR: Q1, Computer Science, Cybernetics
Full text: https://www.sciencedirect.com/science/article/pii/S107158192500062X
Explaining AI weaknesses improves human–AI performance in a dynamic control task
AI-based decision support is increasingly implemented to support operators in dynamic control tasks. While these systems continuously improve, to truly achieve human–system synergy, one must also study humans’ system understanding and behavior. Accordingly, we investigated the impact of explainability instructions regarding a specific system weakness on performance and trust in two experiments (with higher task demands in Experiment 2). Participants performed a dynamic control task with support from either an explainable AI (XAI, information on a system weakness), a non-explainable AI (nonXAI, no information on system weakness), or without support (manual, only in Experiment 2). Results show that participants with XAI support outperformed those in the nonXAI group, particularly in situations where the AI actually erred. Notably, informing users of system weaknesses did not affect trust once they had interacted with the system. In addition, Experiment 2 showed the general benefit of decision support over working manually under higher task demands. These findings suggest that AI support can enhance performance in complex tasks and that providing information on potential system weaknesses aids in managing system errors and resource allocation without compromising trust.
Journal overview:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...