{"title":"Being responsible or affable: Investigating the effects of AI error correction behaviors on user engagement","authors":"Yunchang Zhu, Xianghua Lu","doi":"10.1016/j.dss.2025.114542","DOIUrl":null,"url":null,"abstract":"<div><div>Affable design is increasingly employed in AI conversational agents to foster smoother interaction and enhance user experience. However, a growing concern is that this overemphasis on social appeal often overlooks corrective interventions, particularly when users hold false or biased beliefs. Such omissions carry the risk of reinforcing user misconceptions and ultimately undermining the effectiveness of human–AI collaboration. Drawing upon the attribution theory, this study investigates whether the error-correction behavior of AI agents offset these risks and improve user engagement. Empirical evidence from three experimental studies verifies that AI agents' error-correction behavior indeed enhances users' perceived responsibility of AI agents and strengthens their engagement intentions. This effect does not appear to compromise social comfort, especially in the context where responsibility takes precedence, such as healthcare. This study further finds that the high expertise of AI agents amplifies the positive effects of error-correction behavior, while high entitativity diminishes these effects by blurring AI agents' responsibility. These findings offer important guidance for designing responsible AI agents and highlight the value of AI error-correction behaviors in human-AI interaction.</div></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"198 ","pages":"Article 114542"},"PeriodicalIF":6.8000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923625001435","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Affable design is increasingly employed in AI conversational agents to foster smoother interaction and enhance user experience. However, a growing concern is that this emphasis on social appeal often comes at the expense of corrective interventions, particularly when users hold false or biased beliefs. Such omissions risk reinforcing user misconceptions and ultimately undermining the effectiveness of human–AI collaboration. Drawing upon attribution theory, this study investigates whether the error-correction behavior of AI agents offsets these risks and improves user engagement. Empirical evidence from three experimental studies confirms that AI agents' error-correction behavior enhances users' perceptions of AI agents' responsibility and strengthens their engagement intentions. This effect does not appear to compromise social comfort, especially in contexts where responsibility takes precedence, such as healthcare. The study further finds that high AI-agent expertise amplifies the positive effects of error-correction behavior, while high entitativity diminishes these effects by blurring AI agents' responsibility. These findings offer important guidance for designing responsible AI agents and highlight the value of AI error-correction behaviors in human–AI interaction.
Journal overview:
The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).