Critical roles of explainability in shaping perception, trust, and acceptance of autonomous vehicles

IF 2.5 | Tier 2, Engineering & Technology | Q2 ENGINEERING, INDUSTRIAL
Tingru Zhang , Weitao Li , Weixing Huang , Liang Ma
{"title":"可解释性在形成对自动驾驶汽车的认知、信任和接受度方面的关键作用","authors":"Tingru Zhang ,&nbsp;Weitao Li ,&nbsp;Weixing Huang ,&nbsp;Liang Ma","doi":"10.1016/j.ergon.2024.103568","DOIUrl":null,"url":null,"abstract":"<div><p>Despite the advancements in autonomous vehicles (AVs) and their potential benefits, widespread acceptance of AVs remains low due to the significant barrier of trust. While prior research has explored various factors influencing trust towards AVs, the role of explainability—AVs’ ability to describe the rationale behind their outputs in human-understandable terms—has been largely overlooked. This study aimed to investigate how the perceived explainability of AVs impacts driver perception, trust, and the acceptance of AVs. For this end, an enhanced AV acceptance model that incorporates novel features such as perceived explainability and perceived intelligence was proposed. In order to validate the proposed model, a survey was conducted in which participants were exposed to different AV introductions (<em>basic</em> introduction, <em>video</em> introduction, or introduction with <em>how</em> + <em>why</em> explanations). The responses of 399 participants were analyzed using structural equation modeling. The results showed that perceived explainability had the most profound impact on trust, exerting both direct and indirect effects. AVs perceived as more explainable were also considered easier to use, more useful, safer, and more intelligent, which in turn fostered trust and acceptance. Additionally, the impact of perceived intelligence on trust was significant, indicating that drivers view AVs as intelligent agents rather than mere passive tools. While traditional factors such as perceived ease of use and perceived usefulness remained significant predictors of trust, their effects were smaller than perceived explainability and perceived intelligence. These findings underscore the importance of considering the role of explainability in AV design to address the trust-related challenges associated with AV adoption.</p></div>","PeriodicalId":50317,"journal":{"name":"International Journal of Industrial Ergonomics","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Critical roles of explainability in shaping perception, trust, and acceptance of autonomous vehicles\",\"authors\":\"Tingru Zhang ,&nbsp;Weitao Li ,&nbsp;Weixing Huang ,&nbsp;Liang Ma\",\"doi\":\"10.1016/j.ergon.2024.103568\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Despite the advancements in autonomous vehicles (AVs) and their potential benefits, widespread acceptance of AVs remains low due to the significant barrier of trust. While prior research has explored various factors influencing trust towards AVs, the role of explainability—AVs’ ability to describe the rationale behind their outputs in human-understandable terms—has been largely overlooked. This study aimed to investigate how the perceived explainability of AVs impacts driver perception, trust, and the acceptance of AVs. For this end, an enhanced AV acceptance model that incorporates novel features such as perceived explainability and perceived intelligence was proposed. 
In order to validate the proposed model, a survey was conducted in which participants were exposed to different AV introductions (<em>basic</em> introduction, <em>video</em> introduction, or introduction with <em>how</em> + <em>why</em> explanations). The responses of 399 participants were analyzed using structural equation modeling. The results showed that perceived explainability had the most profound impact on trust, exerting both direct and indirect effects. AVs perceived as more explainable were also considered easier to use, more useful, safer, and more intelligent, which in turn fostered trust and acceptance. Additionally, the impact of perceived intelligence on trust was significant, indicating that drivers view AVs as intelligent agents rather than mere passive tools. While traditional factors such as perceived ease of use and perceived usefulness remained significant predictors of trust, their effects were smaller than perceived explainability and perceived intelligence. These findings underscore the importance of considering the role of explainability in AV design to address the trust-related challenges associated with AV adoption.</p></div>\",\"PeriodicalId\":50317,\"journal\":{\"name\":\"International Journal of Industrial Ergonomics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Industrial Ergonomics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0169814124000246\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Industrial Ergonomics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0169814124000246","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Citations: 0

Abstract

Despite the advancements in autonomous vehicles (AVs) and their potential benefits, widespread acceptance of AVs remains low due to the significant barrier of trust. While prior research has explored various factors influencing trust towards AVs, the role of explainability—AVs’ ability to describe the rationale behind their outputs in human-understandable terms—has been largely overlooked. This study aimed to investigate how the perceived explainability of AVs impacts driver perception, trust, and the acceptance of AVs. To this end, an enhanced AV acceptance model that incorporates novel features such as perceived explainability and perceived intelligence was proposed. To validate the proposed model, a survey was conducted in which participants were exposed to different AV introductions (basic introduction, video introduction, or introduction with how + why explanations). The responses of 399 participants were analyzed using structural equation modeling. The results showed that perceived explainability had the most profound impact on trust, exerting both direct and indirect effects. AVs perceived as more explainable were also considered easier to use, more useful, safer, and more intelligent, which in turn fostered trust and acceptance. Additionally, the impact of perceived intelligence on trust was significant, indicating that drivers view AVs as intelligent agents rather than mere passive tools. While traditional factors such as perceived ease of use and perceived usefulness remained significant predictors of trust, their effects were smaller than those of perceived explainability and perceived intelligence. These findings underscore the importance of considering the role of explainability in AV design to address the trust-related challenges associated with AV adoption.
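
To make the analysis approach concrete, below is a minimal sketch of how an acceptance model of the shape described in the abstract could be specified and estimated with structural equation modeling, using the open-source semopy package in Python. This is not the authors' code or model specification: the latent constructs follow the abstract, but all survey item names (PE1–PE3, TR1–TR3, etc.), the file survey_responses.csv, and the exact set of structural paths are illustrative assumptions.

```python
# Hypothetical SEM specification inspired by the acceptance model described
# in the abstract; item names, data file, and paths are assumptions.
import pandas as pd
import semopy

model_desc = """
# measurement model: each latent construct is defined by its survey items
PerceivedExplainability =~ PE1 + PE2 + PE3
PerceivedIntelligence =~ PI1 + PI2 + PI3
PerceivedEaseOfUse =~ PEOU1 + PEOU2 + PEOU3
PerceivedUsefulness =~ PU1 + PU2 + PU3
PerceivedSafety =~ PS1 + PS2 + PS3
Trust =~ TR1 + TR2 + TR3
Acceptance =~ ACC1 + ACC2 + ACC3

# structural model: explainability acts on trust both directly and
# indirectly through ease of use, usefulness, safety, and intelligence
PerceivedEaseOfUse ~ PerceivedExplainability
PerceivedUsefulness ~ PerceivedExplainability + PerceivedEaseOfUse
PerceivedSafety ~ PerceivedExplainability
PerceivedIntelligence ~ PerceivedExplainability
Trust ~ PerceivedExplainability + PerceivedIntelligence + PerceivedEaseOfUse + PerceivedUsefulness + PerceivedSafety
Acceptance ~ Trust + PerceivedUsefulness
"""

# Hypothetical item-level dataset (e.g., one row per respondent, 399 rows)
data = pd.read_csv("survey_responses.csv")

model = semopy.Model(model_desc)
model.fit(data)                    # estimate loadings and path coefficients
print(model.inspect())             # parameter estimates, standard errors, p-values
print(semopy.calc_stats(model))    # fit indices such as CFI, TLI, RMSEA
```

Direct and indirect (mediated) effects of the explainability construct on trust can then be compared from the estimated path coefficients; the actual constructs, items, and paths used in the study are defined in the full paper.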

Source journal
International Journal of Industrial Ergonomics (Engineering & Technology: Engineering, Industrial)
CiteScore: 6.40
Self-citation rate: 12.90%
Publication volume: 110
Review time: 56 days
Journal description: The journal publishes original contributions that add to our understanding of the role of humans in today's systems and their interactions with various system components. The journal typically covers the following areas: industrial and occupational ergonomics; design of systems, tools and equipment; human performance measurement and modeling; human productivity; humans in technologically complex systems; and safety. The focus of the articles includes basic theoretical advances, applications, case studies, new methodologies and procedures, and empirical studies.