Critical roles of explainability in shaping perception, trust, and acceptance of autonomous vehicles
Tingru Zhang, Weitao Li, Weixing Huang, Liang Ma
International Journal of Industrial Ergonomics, Volume 100, Article 103568
DOI: 10.1016/j.ergon.2024.103568
Publication date: 2024-03-01
URL: https://www.sciencedirect.com/science/article/pii/S0169814124000246
Citations: 0
Abstract
Despite the advancements in autonomous vehicles (AVs) and their potential benefits, widespread acceptance of AVs remains low due to the significant barrier of trust. While prior research has explored various factors influencing trust towards AVs, the role of explainability—AVs' ability to describe the rationale behind their outputs in human-understandable terms—has been largely overlooked. This study aimed to investigate how the perceived explainability of AVs impacts driver perception, trust, and the acceptance of AVs. To this end, an enhanced AV acceptance model incorporating novel features such as perceived explainability and perceived intelligence was proposed. To validate the proposed model, a survey was conducted in which participants were exposed to different AV introductions (basic introduction, video introduction, or introduction with how + why explanations). The responses of 399 participants were analyzed using structural equation modeling. The results showed that perceived explainability had the most profound impact on trust, exerting both direct and indirect effects. AVs perceived as more explainable were also considered easier to use, more useful, safer, and more intelligent, which in turn fostered trust and acceptance. Additionally, the impact of perceived intelligence on trust was significant, indicating that drivers view AVs as intelligent agents rather than mere passive tools. While traditional factors such as perceived ease of use and perceived usefulness remained significant predictors of trust, their effects were smaller than those of perceived explainability and perceived intelligence. These findings underscore the importance of considering the role of explainability in AV design to address the trust-related challenges associated with AV adoption.
Journal overview:
The journal publishes original contributions that add to our understanding of the role of humans in today's systems and their interactions with various system components. The journal typically covers the following areas: industrial and occupational ergonomics; design of systems, tools, and equipment; human performance measurement and modeling; human productivity; humans in technologically complex systems; and safety. The focus of the articles includes basic theoretical advances, applications, case studies, new methodologies and procedures, and empirical studies.