Trust me! I am a robot: an affective computational account of scaffolding in robot-robot interaction

M. Kirtay, Erhan Öztop, M. Asada, V. Hafner
{"title":"相信我!我是一个机器人:机器人-机器人交互中脚手架的情感计算描述","authors":"M. Kirtay, Erhan Öztop, M. Asada, V. Hafner","doi":"10.1109/RO-MAN50785.2021.9515494","DOIUrl":null,"url":null,"abstract":"Forming trust in a biological or artificial interaction partner that provides reliable strategies and employing the learned strategies to scaffold another agent are critical problems that are often addressed separately in human-robot and robot-robot interaction studies. In this paper, we provide a unified approach to address these issues in robot-robot interaction settings. To be concrete, we present a trust-based affective computational account of scaffolding while performing a sequential visual recalling task. In that, we endow the Pepper humanoid robot with cognitive modules of auto-associative memory and internal reward generation to implement the trust model. The former module is an instance of a cognitive function with an associated neural cost determining the cognitive load of performing visual memory recall. The latter module uses this cost to generate an internal reward signal to facilitate neural cost-based reinforcement learning (RL) in an interactive scenario involving online instructors with different guiding strategies: reliable, less-reliable, and random. These cognitive modules allow the Pepper robot to assess the instructors based on the average cumulative reward it can collect and choose the instructor that helps reduce its cognitive load most as the trustworthy one. After determining the trustworthy instructor, the Pepper robot is recruited to be a caregiver robot to guide a perceptually limited infant robot (i.e., the Nao robot) that performs the same task. In this setting, we equip the Pepper robot with a simple theory of mind module that learns the state-action-reward associations by observing the infant robot’s behavior and guides the learning of the infant robot, similar to when it went through the online agent-robot interactions. The experiment results on this robot-robot interaction scenario indicate that the Pepper robot as a caregiver leverages the decision-making policies – obtained by interacting with the trustworthy instructor– to guide the infant robot to perform the same task efficiently. Overall, this study suggests how robotic-trust can be grounded in human-robot or robot-robot interactions based on cognitive load, and be used as a mechanism to choose the right scaffolding agent for effective knowledge transfer.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"221 1","pages":"189-196"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Trust me! I am a robot: an affective computational account of scaffolding in robot-robot interaction\",\"authors\":\"M. Kirtay, Erhan Öztop, M. Asada, V. Hafner\",\"doi\":\"10.1109/RO-MAN50785.2021.9515494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Forming trust in a biological or artificial interaction partner that provides reliable strategies and employing the learned strategies to scaffold another agent are critical problems that are often addressed separately in human-robot and robot-robot interaction studies. In this paper, we provide a unified approach to address these issues in robot-robot interaction settings. 
To be concrete, we present a trust-based affective computational account of scaffolding while performing a sequential visual recalling task. In that, we endow the Pepper humanoid robot with cognitive modules of auto-associative memory and internal reward generation to implement the trust model. The former module is an instance of a cognitive function with an associated neural cost determining the cognitive load of performing visual memory recall. The latter module uses this cost to generate an internal reward signal to facilitate neural cost-based reinforcement learning (RL) in an interactive scenario involving online instructors with different guiding strategies: reliable, less-reliable, and random. These cognitive modules allow the Pepper robot to assess the instructors based on the average cumulative reward it can collect and choose the instructor that helps reduce its cognitive load most as the trustworthy one. After determining the trustworthy instructor, the Pepper robot is recruited to be a caregiver robot to guide a perceptually limited infant robot (i.e., the Nao robot) that performs the same task. In this setting, we equip the Pepper robot with a simple theory of mind module that learns the state-action-reward associations by observing the infant robot’s behavior and guides the learning of the infant robot, similar to when it went through the online agent-robot interactions. The experiment results on this robot-robot interaction scenario indicate that the Pepper robot as a caregiver leverages the decision-making policies – obtained by interacting with the trustworthy instructor– to guide the infant robot to perform the same task efficiently. Overall, this study suggests how robotic-trust can be grounded in human-robot or robot-robot interactions based on cognitive load, and be used as a mechanism to choose the right scaffolding agent for effective knowledge transfer.\",\"PeriodicalId\":6854,\"journal\":{\"name\":\"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)\",\"volume\":\"221 1\",\"pages\":\"189-196\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RO-MAN50785.2021.9515494\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN50785.2021.9515494","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

Forming trust in a biological or artificial interaction partner that provides reliable strategies and employing the learned strategies to scaffold another agent are critical problems that are often addressed separately in human-robot and robot-robot interaction studies. In this paper, we provide a unified approach to address these issues in robot-robot interaction settings. To be concrete, we present a trust-based affective computational account of scaffolding while performing a sequential visual recalling task. In that, we endow the Pepper humanoid robot with cognitive modules of auto-associative memory and internal reward generation to implement the trust model. The former module is an instance of a cognitive function with an associated neural cost determining the cognitive load of performing visual memory recall. The latter module uses this cost to generate an internal reward signal to facilitate neural cost-based reinforcement learning (RL) in an interactive scenario involving online instructors with different guiding strategies: reliable, less-reliable, and random. These cognitive modules allow the Pepper robot to assess the instructors based on the average cumulative reward it can collect and choose the instructor that helps reduce its cognitive load most as the trustworthy one. After determining the trustworthy instructor, the Pepper robot is recruited to be a caregiver robot to guide a perceptually limited infant robot (i.e., the Nao robot) that performs the same task. In this setting, we equip the Pepper robot with a simple theory of mind module that learns the state-action-reward associations by observing the infant robot’s behavior and guides the learning of the infant robot, similar to when it went through the online agent-robot interactions. The experimental results on this robot-robot interaction scenario indicate that the Pepper robot as a caregiver leverages the decision-making policies – obtained by interacting with the trustworthy instructor – to guide the infant robot to perform the same task efficiently. Overall, this study suggests how robot trust can be grounded in human-robot or robot-robot interactions based on cognitive load, and be used as a mechanism to choose the right scaffolding agent for effective knowledge transfer.
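
The abstract describes a concrete mechanism: recall with the auto-associative memory incurs a neural cost, that cost is converted into an internal reward, reinforcement learning runs on that reward while following each online instructor, and the instructor with the highest average cumulative reward is selected as the trustworthy scaffolding agent. The Python sketch below is an illustrative reading of that mechanism only, not the authors' implementation: every name, probability, and the toy task structure (neural_cost, suggest, N_STATES, the reliability values) is an assumption added for clarity.

```python
"""Hypothetical sketch of cost-based trust in an instructor, per the abstract's description.
All task details and parameters are assumptions, not the paper's actual model."""
import random
from collections import defaultdict

random.seed(0)

N_STATES = 8          # abstract states of the sequential visual recall task (assumed)
N_ACTIONS = 4         # possible recall actions (assumed)
EPISODES = 200
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1


def neural_cost(state: int, action: int, noise: float) -> float:
    """Stand-in for the cost of an auto-associative memory recall:
    lower cost means the recall converged easily (low cognitive load)."""
    base = abs(action - (state % N_ACTIONS)) / N_ACTIONS   # toy task structure
    return min(1.0, base + random.uniform(0.0, noise))


def internal_reward(cost: float) -> float:
    """Internal reward signal: cheap recalls are rewarding, costly ones are not."""
    return 1.0 - cost


def suggest(instructor: str, state: int) -> int:
    """Guidance with different reliability levels, mirroring the abstract's
    reliable / less-reliable / random instructors (probabilities are assumed)."""
    good = state % N_ACTIONS
    p_correct = {"reliable": 0.9, "less-reliable": 0.6, "random": 0.0}[instructor]
    return good if random.random() < p_correct else random.randrange(N_ACTIONS)


def run_with_instructor(instructor: str) -> float:
    """Neural cost-based Q-learning while following one instructor;
    returns the average cumulative reward used here as the trust score."""
    q = defaultdict(float)
    total = 0.0
    for _ in range(EPISODES):
        state = random.randrange(N_STATES)
        episode_reward = 0.0
        for _ in range(N_STATES):                      # fixed-length episodes for simplicity
            if random.random() < EPSILON:
                action = random.randrange(N_ACTIONS)   # occasional exploration
            else:
                action = suggest(instructor, state)    # follow the instructor's guidance
            cost = neural_cost(state, action, noise=0.2)
            r = internal_reward(cost)
            next_state = (state + 1) % N_STATES
            best_next = max(q[(next_state, a)] for a in range(N_ACTIONS))
            q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
            episode_reward += r
            state = next_state
        total += episode_reward
    return total / EPISODES


if __name__ == "__main__":
    scores = {name: run_with_instructor(name) for name in ("reliable", "less-reliable", "random")}
    print("average cumulative reward per instructor:", scores)
    print("trusted scaffolding agent:", max(scores, key=scores.get))
```

Run as written, the sketch ranks the reliable instructor highest, since its suggestions lead to low-cost recalls and therefore higher internal reward; this is the same ordering the abstract reports as the basis for choosing the trustworthy scaffolding agent, and the policy learned under that instructor is what the caregiver Pepper would then reuse to guide the Nao infant robot.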