To Err is Automation: Can Trust be Repaired by the Automated Driving System After its Failure?

IF 3.5 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Peng Liu;Yueying Chu;Guanqun Wang;Zhigang Xu
DOI: 10.1109/THMS.2024.3434680
Journal: IEEE Transactions on Human-Machine Systems
Publication date: 2024-08-13 (Journal Article)
Open access: no
URL: https://ieeexplore.ieee.org/document/10636033/
Citations: 0

Abstract

Failures of the automated driving system (ADS) in automated vehicles (AVs) can damage driver–ADS cooperation (e.g., by eroding trust) and traffic safety. Researchers suggest infusing a human-like ability, active trust repair, into automated systems to mitigate broken trust and other negative impacts resulting from their failures. Trust repair is regarded as a key ergonomic design element in automated systems. Trust repair strategies (e.g., apology) have been examined and supported by some evidence in controlled environments; however, they have rarely been subjected to empirical evaluation in more naturalistic environments. To fill this gap, we conducted a test-track study in which participants (N = 257) experienced an ADS failure, and we tested the influence of the ADS's trust repair on trust and other psychological responses. Half of the participants (n = 128) received a verbal message from the ADS (consisting of an apology, an explanation, and a promise) after its failure, delivered either by a human voice (n = 63) or by Apple's Siri (n = 65). We measured seven psychological responses to AVs and the ADS [e.g., trust and behavioral intention (BI)]. We found that neither strategy repaired the damaged trust. The human-voice repair strategy mitigated, to some degree, other detrimental effects of the ADS failure (e.g., reductions in BI), but this effect was notable only among participants without substantial driving experience. These findings point to the importance of conducting ecologically valid and field studies to validate human-like trust repair strategies in human–automation interaction, and of developing trust repair strategies specific to safety-critical situations.
Source journal: IEEE Transactions on Human-Machine Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 7.10
Self-citation rate: 11.10%
Articles published: 136
Journal description: The scope of the IEEE Transactions on Human-Machine Systems includes the field of human–machine systems. It covers human systems and human–organizational interactions, including cognitive ergonomics, system test and evaluation, and human information processing concerns in systems and organizations.