Title: Explainability increases trust resilience in intelligent agents
Authors: Min Xu, Yiwen Wang
Journal: British Journal of Psychology (Q1, Psychology, Multidisciplinary; Impact Factor 3.2)
Publication date: 2024-10-21
Publication type: Journal Article
DOI: 10.1111/bjop.12740 (https://doi.org/10.1111/bjop.12740)
Citations: 0
Abstract
Even though artificial intelligence (AI)-based systems typically outperform human decision-makers, they are not immune to errors, leading users to lose trust in them and become less likely to use them again, a phenomenon known as algorithm aversion. The present research investigated whether explainable AI (XAI) could serve as a viable strategy to counter algorithm aversion. We conducted two experiments to examine how XAI influences users' willingness to continue using AI-based systems after those systems exhibit errors. The results showed that, after observing algorithms erring, users' inclination to delegate decisions to or follow advice from intelligent agents decreased significantly compared to the period before the errors were revealed. However, explainability effectively mitigated this decline: users in the XAI condition were more likely than those in the non-XAI condition to continue using intelligent agents for subsequent tasks after seeing the algorithms err. We further found that explainability reduced users' decision regret, and that this decrease in decision regret mediated the relationship between explainability and re-use behaviour. These findings underscore the adaptive function of XAI in alleviating negative user experiences and maintaining user trust in the context of imperfect AI.
Journal Introduction
The British Journal of Psychology publishes original research on all aspects of general psychology, including cognition; health and clinical psychology; and developmental, social, and occupational psychology. For information on specific requirements, please view the Notes for Contributors. The journal attracts a large number of international submissions each year that make major contributions across the range of psychology.