{"title":"Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles","authors":"Martina Schuß, Luca Pizzoni, Andreas Riener","doi":"10.1007/s12193-024-00436-x","DOIUrl":null,"url":null,"abstract":"<p>Shared Automated Vehicles (SAVs) promise to make automated mobility accessible to a wide range of people while reducing air pollution and improving traffic flow. In the future, these vehicles will operate with no human driver on board, which poses several challenges that might differ depending on the cultural context and make one-fits-all solutions demanding. A promising substitute for the driver could be Digital Companions (DCs), i.e. conversational agents presented on a screen inside the vehicles. We conducted interviews with Colombian participants and workshops with German and Korean participants and derived two design concepts of DCs as an alternative for the human driver on SAVs: a human-like and a robot-like. We compared these two concepts to a baseline without companion using a scenario-based online questionnaire with participants from Colombia (N = 57), Germany (N = 50), and Korea (N = 29) measuring anxiety, security, trust, risk, control, threat, and user experience. In comparison with the baseline, both DCs are statistically significantly perceived as more positively. While we found a preference for the human-like DC among all participants, this preference is higher among Colombians while Koreans show the highest openness towards the robot-like DC.</p>","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":"11 1","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal on Multimodal User Interfaces","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12193-024-00436-x","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Shared Automated Vehicles (SAVs) promise to make automated mobility accessible to a wide range of people while reducing air pollution and improving traffic flow. In the future, these vehicles will operate with no human driver on board, which poses several challenges that may differ depending on the cultural context and make one-size-fits-all solutions difficult. A promising substitute for the driver could be Digital Companions (DCs), i.e., conversational agents presented on a screen inside the vehicle. We conducted interviews with Colombian participants and workshops with German and Korean participants and derived two design concepts of DCs as an alternative to the human driver in SAVs: a human-like and a robot-like one. We compared these two concepts to a baseline without a companion using a scenario-based online questionnaire with participants from Colombia (N = 57), Germany (N = 50), and Korea (N = 29), measuring anxiety, security, trust, risk, control, threat, and user experience. Compared with the baseline, both DCs were perceived significantly more positively. While we found a preference for the human-like DC among all participants, this preference is stronger among Colombians, while Koreans show the highest openness towards the robot-like DC.
Journal description:
The Journal on Multimodal User Interfaces publishes work on the design, implementation, and evaluation of multimodal interfaces. Research in the domain of multimodal interaction is by its very essence a multidisciplinary area involving several fields, including signal processing, human-machine interaction, computer science, cognitive science, and ergonomics. The journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability, and architectural considerations. Use cases and descriptions of specific application areas are welcome, including, for example, e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.