Do Humans Trust Robots That Violate Moral Trust?

Zahra Rezaei Khavas, Monish Reddy Kotturu, S. Reza Ahmadzadeh, Paul Robinette
{"title":"人类信任违背道德信任的机器人吗?","authors":"Zahra Rezaei Khavas, Monish Reddy Kotturu, S.Reza Ahmadzadeh, Paul Robinette","doi":"10.1145/3651992","DOIUrl":null,"url":null,"abstract":"The increasing use of robots in social applications requires further research on human-robot trust. The research on human-robot trust needs to go beyond the conventional definition that mainly focuses on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These trust scales consider one performance aspect (i.e., the trust in an agent’s competence to perform a given task and their proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent’s honesty in fulfilling their stated commitments or promises) for human-robot trust. The question that arises here is to what extent do these trust aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot’s undesirable behavior due to the performance trust violation would affect human trust differently than another similar undesirable behavior due to a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation with the same magnitude and consequences.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"262 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Do Humans Trust Robots that Violate moral trust?\",\"authors\":\"Zahra Rezaei Khavas, Monish Reddy Kotturu, S.Reza Ahmadzadeh, Paul Robinette\",\"doi\":\"10.1145/3651992\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasing use of robots in social applications requires further research on human-robot trust. The research on human-robot trust needs to go beyond the conventional definition that mainly focuses on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These trust scales consider one performance aspect (i.e., the trust in an agent’s competence to perform a given task and their proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent’s honesty in fulfilling their stated commitments or promises) for human-robot trust. The question that arises here is to what extent do these trust aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot’s undesirable behavior due to the performance trust violation would affect human trust differently than another similar undesirable behavior due to a moral trust violation. 
We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation with the same magnitude and consequences.\",\"PeriodicalId\":504644,\"journal\":{\"name\":\"ACM Transactions on Human-Robot Interaction\",\"volume\":\"262 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Human-Robot Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3651992\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Human-Robot Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3651992","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The increasing use of robots in social applications calls for further research on human-robot trust, research that needs to go beyond the conventional definition, which focuses mainly on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in how users perceive human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These scales consider one performance aspect (i.e., trust in an agent’s competence to perform a given task and its proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent’s honesty in fulfilling its stated commitments or promises) of human-robot trust. The question that arises here is to what extent these trust aspects affect human trust in a robot. The main goal of this study is to investigate whether undesirable robot behavior caused by a performance trust violation affects human trust differently than similar undesirable behavior caused by a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation with the same magnitude and consequences.
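The abstract does not describe how the search task operationalizes the two violation types. As a purely illustrative sketch (all class, field, and function names below are hypothetical, not taken from the paper), one way to separate the two failure modes while holding the consequence constant might look like this:

```python
# Illustrative sketch only: the paper's actual task implementation is not
# described in the abstract, so this operationalization is an assumption.
from dataclasses import dataclass
from enum import Enum, auto


class ViolationType(Enum):
    PERFORMANCE = auto()  # robot is incompetent: searches but misses the target
    MORAL = auto()        # robot is dishonest: claims it searched when it did not


@dataclass
class SearchReport:
    zone: str
    searched: bool      # did the robot actually search the zone?
    reported: bool      # did the robot tell the human it searched the zone?
    found_target: bool


def classify_violation(report: SearchReport, target_in_zone: bool) -> ViolationType | None:
    """Distinguish the two failure modes when the consequence is identical
    (the target is missed either way)."""
    if report.searched and report.reported and target_in_zone and not report.found_target:
        return ViolationType.PERFORMANCE  # honest effort, faulty execution
    if report.reported and not report.searched:
        return ViolationType.MORAL        # broken promise, regardless of outcome
    return None  # no violation


# Same consequence (missed target), different trust dimension violated:
print(classify_violation(SearchReport("A3", True, True, False), target_in_zone=True))
print(classify_violation(SearchReport("A3", False, True, False), target_in_zone=True))
```

The design point the abstract emphasizes is that both violation conditions produce outcomes of the same magnitude and consequence, so any measured difference in human trust can be attributed to the violated trust dimension rather than to the severity of the failure.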