Forming robot trust in heterogeneous agents during a multimodal interactive game

M. Kirtay, Erhan Öztop, A. Kuhlen, M. Asada, V. Hafner
DOI: 10.1109/ICDL53763.2022.9962212
Published in: 2022 IEEE International Conference on Development and Learning (ICDL)
Publication date: 2022-09-12
Citations: 2

Abstract

This study presents a robot trust model based on cognitive load that uses multimodal cues in a learning setting to assess the trustworthiness of heterogeneous interaction partners. As a test-bed, we designed an interactive task where a small humanoid robot, Nao, is asked to perform a sequential audio-visual pattern recall task while minimizing its cognitive load by receiving help from its interaction partner, either a robot, Pepper, or a human. The partner displayed one of three guiding strategies, reliable, unreliable, or random. The robot is equipped with two cognitive modules: a multimodal auto-associative memory and an internal reward module. The former represents the multimodal cognitive processing of the robot and allows a ‘cognitive load’ or ‘cost’ to be assigned to the processing that takes place, while the latter converts the cognitive processing cost to an internal reward signal that drives the cost-based behavior learning. Here, the robot asks for help from its interaction partner when its action leads to a high cognitive load. Then the robot receives an action suggestion from the partner and follows it. After performing interactive experiments with each partner, the robot uses the cognitive load yielded during the interaction to assess the trustworthiness of the partners –i.e., it associates high trustworthiness with low cognitive load. We then give a free choice to the robot to select the trustworthy interaction partner to perform the next task. Our results show that, overall, the robot selects partners with reliable guiding strategies. Moreover, the robot’s ability to identify a trustworthy partner was unaffected by whether the partner was a human or a robot.
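The abstract's core mechanism can be summarized as: convert the cognitive load incurred with each partner into an internal reward, average that reward to form a trust estimate, and pick the partner with the highest trust. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the linear load-to-reward mapping, the function names, and the example load values are all hypothetical.

```python
def internal_reward(cognitive_load, max_load=1.0):
    """Convert a cognitive-processing cost into an internal reward.
    Lower load yields higher reward (hypothetical linear mapping)."""
    return max_load - cognitive_load

def assess_trust(loads_per_partner):
    """Associate high trustworthiness with low average cognitive load,
    as described in the abstract: trust = mean internal reward."""
    return {partner: sum(internal_reward(l) for l in loads) / len(loads)
            for partner, loads in loads_per_partner.items()}

def select_partner(trust):
    """Free-choice phase: pick the partner with the highest trust value."""
    return max(trust, key=trust.get)

# Illustrative (made-up) cognitive loads observed with each guiding strategy.
loads = {
    "reliable":   [0.2, 0.3, 0.25],
    "unreliable": [0.8, 0.7, 0.9],
    "random":     [0.5, 0.6, 0.4],
}
trust = assess_trust(loads)
print(select_partner(trust))  # the reliable partner yields the lowest load
```

Under these assumed numbers the reliable partner accumulates the highest trust, mirroring the paper's reported outcome that the robot overall selects partners with reliable guiding strategies.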