Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

A. M. Aroyo, Jan de Bruyne, Orian Dheu, E. Fosch-Villaronga, Aleksei Gudkov, Holly Hoch, Steve Jones, C. Lutz, H. Sætra, Mads Solberg, Aurelia Tamó-Larrieux
Journal: Paladyn: Journal of Behavioral Robotics, Vol. 14, No. 1, pp. 423–436
Publication date: 2021-01-01
DOI: 10.1515/pjbr-2021-0029
Publication type: Journal Article
Citations: 21

Abstract

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations from a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for common understanding for overtrust in the context of HRI.