Anthropomorphic Design and Self-Reported Behavioral Trust: The Case of a Virtual Assistant in a Highly Automated Car

IF 2.1 · CAS Zone 3 (Engineering & Technology) · JCR Q3, ENGINEERING, ELECTRICAL & ELECTRONIC
Machines · Pub Date: 2023-12-13 · DOI: 10.3390/machines11121087
Clarisse Lawson-Guidigbe, Kahina Amokrane-Ferka, Nicolas Louveton, Benoit Leblanc, Virgil Rousseaux, Jean-Marc André
{"title":"Anthropomorphic Design and Self-Reported Behavioral Trust: The Case of a Virtual Assistant in a Highly Automated Car","authors":"Clarisse Lawson-Guidigbe, Kahina Amokrane-Ferka, Nicolas Louveton, Benoit Leblanc, Virgil Rousseaux, Jean-Marc André","doi":"10.3390/machines11121087","DOIUrl":null,"url":null,"abstract":"The latest advances in car automation present new challenges in vehicle–driver interactions. Indeed, acceptance and adoption of high levels of automation (when full control of the driving task is given to the automated system) are conditioned by human factors such as user trust. In this work, we study the impact of anthropomorphic design on user trust in the context of a highly automated car. A virtual assistant was designed using two levels of anthropomorphic design: “voice-only” and “voice with visual appearance”. The visual appearance was a three-dimensional model, integrated as a hologram in the cockpit of a driving simulator. In a driving simulator study, we compared the three interfaces: two versions of the virtual assistant interface and the baseline interface with no anthropomorphic attributes. We measured trust versus perceived anthropomorphism. We also studied the evolution of trust throughout a range of driving scenarios. We finally analyzed participants’ reaction time to takeover request events. We found a significant correlation between perceived anthropomorphism and trust. However, the three interfaces tested did not significantly differentiate in terms of perceived anthropomorphism while trust converged over time across all our measurements. Finally, we found that the anthropomorphic assistant positively impacts reaction time for one takeover request scenario. We discuss methodological issues and implication for design and further research.","PeriodicalId":48519,"journal":{"name":"Machines","volume":"162 3","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machines","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/machines11121087","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The latest advances in car automation present new challenges in vehicle–driver interactions. Indeed, acceptance and adoption of high levels of automation (where full control of the driving task is given to the automated system) are conditioned by human factors such as user trust. In this work, we study the impact of anthropomorphic design on user trust in the context of a highly automated car. A virtual assistant was designed with two levels of anthropomorphic design: “voice-only” and “voice with visual appearance”. The visual appearance was a three-dimensional model integrated as a hologram in the cockpit of a driving simulator. In a driving simulator study, we compared three interfaces: two versions of the virtual assistant interface and a baseline interface with no anthropomorphic attributes. We measured trust alongside perceived anthropomorphism, studied the evolution of trust across a range of driving scenarios, and analyzed participants’ reaction times to takeover request events. We found a significant correlation between perceived anthropomorphism and trust. However, the three interfaces did not differ significantly in perceived anthropomorphism, and trust converged over time across all our measurements. Finally, we found that the anthropomorphic assistant positively impacted reaction time in one takeover request scenario. We discuss methodological issues and implications for design and further research.
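For illustration only (not taken from the paper): the reported association between perceived anthropomorphism and trust would typically be tested with a rank correlation over per-participant questionnaire scores. The minimal Python sketch below uses made-up placeholder ratings and scipy.stats.spearmanr; all variable names and values are hypothetical assumptions, not the authors' data or analysis pipeline.

# Illustrative sketch only: rank correlation between two questionnaire scores.
# The ratings below are invented placeholders (e.g., 1-7 Likert-scale means).
from scipy.stats import spearmanr

# Hypothetical per-participant mean scores
perceived_anthropomorphism = [3.2, 4.5, 2.8, 5.1, 4.0, 3.7, 4.8, 2.5]
self_reported_trust        = [4.1, 5.0, 3.5, 5.6, 4.4, 4.0, 5.2, 3.1]

rho, p_value = spearmanr(perceived_anthropomorphism, self_reported_trust)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")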
Source Journal
Machines
CiteScore: 3.00
Self-citation rate: 26.90%
Annual publications: 1012
Review time: 11 weeks
Journal description: Machines (ISSN 2075-1702) is an international, peer-reviewed journal on machinery and engineering. It publishes research articles, reviews, short communications and letters. Its aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible; there is no restriction on the length of papers, and full experimental and/or methodical details must be provided. In addition, the journal has unique features: manuscripts regarding research proposals and research ideas are particularly welcome, and electronic files or software containing the full details of the calculation and experimental procedure, if unable to be published in a normal way, can be deposited as supplementary material. Subject areas: applications of automation, systems and control engineering, electronic engineering, mechanical engineering, computer engineering, mechatronics, robotics, industrial design, human-machine interfaces, mechanical systems, machines and related components, machine vision, history of technology and the industrial revolution, turbomachinery, machine diagnostics and prognostics (condition monitoring), machine design.