A matter of consequences

Impact Factor 0.9 · JCR Q3 (Communication) · CAS Region 4 (Psychology)
Alessandra Rossi, Kerstin Dautenhahn, Kheng Lee Koay, Michael L. Walters
Journal: Interaction Studies, volume 62, issue 1
DOI: 10.1075/is.21025.ros
Publication date: 2023-12-31
Publication type: Journal Article
Citations: 0

Abstract

In reviewing the literature on acceptance and trust in human-robot interaction (HRI), we found a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild. These areas focus on: (1) the robot’s abilities and limitations, in particular when it makes errors with consequences of varying severity; (2) individual differences; (3) the dynamics of human-robot trust; and (4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities, and one with a Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect responses from 154 participants. In the second study, six participants had repeated interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigations into the effects of robots’ errors on people’s trust in robots, with a view to designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots’ errors had a greater impact on people’s trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how the effects of these errors vary according to individuals’ personalities, expectations and previous experiences.
Source journal: Interaction Studies
CiteScore: 3.30
Self-citation rate: 6.70%
Articles published per year: 8
Journal description: This international peer-reviewed journal aims to advance knowledge in the growing and strongly interdisciplinary area of Interaction Studies in biological and artificial systems. Understanding social behaviour and communication in biological and artificial systems requires knowledge of evolutionary, developmental and neurobiological aspects of social behaviour and communication; the embodied nature of interactions; the origins and characteristics of social and narrative intelligence; perception, action and communication in the context of dynamic and social environments; and social learning.