Trust in Autonomous Cars Does Not Largely Differ from Trust in Human Drivers when They Make Minor Errors

Ryosuke Yokoi
DOI: 10.1177/03611981241263350 (https://doi.org/10.1177/03611981241263350)
Journal: Transportation Research Record: Journal of the Transportation Research Board
Publication date: 2024-07-27 · Journal Article · Citations: 0

Abstract

Studies have explored the factors that determine trust in autonomous cars (ACs), and their ability and performance have been shown to be crucial. However, little is known about the effects of minor errors that do not involve negative consequences such as property damage or fatalities. People are likely to expect automation technologies to perform better than humans. It was therefore hypothesized that minor errors would violate these expectations and significantly decrease trust in ACs. This study investigated whether minor errors have a more negative effect on trust in ACs than on trust in human drivers. Two experiments were conducted (N = 821) in Japan. Two independent variables were manipulated: agent (AC vs. human) and error (error vs. no-error). Some participants watched videos depicting ACs or human drivers making minor errors, such as taking longer to park (Experiment 1) or being slow to start moving when traffic lights turned green (Experiment 2); these minor errors did not violate Japanese traffic laws. Other participants watched videos in which no errors occurred. A two-way analysis of variance showed no evidence that agent type modulated the negative effects of these minor errors on trust: minor errors did not produce a significant difference in trust levels between ACs and human drivers. The study also indicated that people expected ACs not to make more errors than humans did, but these expectations did not increase trust in ACs. The findings further suggest that minor errors are unlikely to cause an underestimation of ACs’ capabilities.
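The key statistical test described above is the agent × error interaction in a two-way ANOVA. The following is a minimal sketch of that computation for a balanced 2 × 2 between-subjects design; the function name and the trust ratings are hypothetical illustrations, not the study's actual measures or data.

```python
# Minimal balanced two-way (2 x 2) ANOVA, computed from first principles.
# Factors mirror the study's design: agent (AC vs. human) x error (error vs. no-error).

def two_way_anova_2x2(cells):
    """cells maps (agent, error) -> equal-length lists of trust ratings."""
    n = len(next(iter(cells.values())))          # observations per cell (balanced)
    obs = [x for v in cells.values() for x in v]
    grand = sum(obs) / len(obs)
    mean = {k: sum(v) / n for k, v in cells.items()}
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    row = {a: sum(mean[(a, b)] for b in b_levels) / len(b_levels) for a in a_levels}
    col = {b: sum(mean[(a, b)] for a in a_levels) / len(a_levels) for b in b_levels}
    # Sums of squares for the two main effects, the interaction, and within-cell error
    ss_a = len(b_levels) * n * sum((row[a] - grand) ** 2 for a in a_levels)
    ss_b = len(a_levels) * n * sum((col[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((mean[(a, b)] - row[a] - col[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_w = sum((x - mean[k]) ** 2 for k, v in cells.items() for x in v)
    df_w = len(cells) * (n - 1)
    ms_w = ss_w / df_w                           # each effect has df = 1 in a 2 x 2
    return {"F_agent": ss_a / ms_w, "F_error": ss_b / ms_w,
            "F_interaction": ss_ab / ms_w, "df_within": df_w}

# Hypothetical ratings chosen so that only the interaction carries variance.
res = two_way_anova_2x2({
    ("AC", "error"):    [3, 5],
    ("AC", "none"):     [5, 7],
    ("human", "error"): [5, 7],
    ("human", "none"):  [3, 5],
})
```

In the paper's framing, a non-significant interaction F is precisely the "no evidence that agent type modulated the effect of minor errors" result: the trust penalty for an error would be statistically indistinguishable between AC and human drivers.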