An analysis of the role of different levels of exchange of explicit information in human-robot cooperation.

IF 2.9 · Q2 (ROBOTICS)
Frontiers in Robotics and AI · Pub Date: 2025-02-10 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1511619
Ane San Martin, Johan Kildal, Elena Lazkano
{"title":"An analysis of the role of different levels of exchange of explicit information in human-robot cooperation.","authors":"Ane San Martin, Johan Kildal, Elena Lazkano","doi":"10.3389/frobt.2025.1511619","DOIUrl":null,"url":null,"abstract":"<p><p>For smooth human-robot cooperation, it is crucial that robots understand social cues from humans and respond accordingly. Contextual information provides the human partner with real-time insights into how the robot interprets social cues and what action decisions it makes as a result. We propose and implement a novel design for a human-robot cooperation framework that uses augmented reality and user gaze to enable bidirectional communication. Through this framework, the robot can recognize the objects in the scene that the human is looking at and infer the human's intentions within the context of the cooperative task. We proposed three levels of exchange of explicit information designs, each providing increasingly more information. These designs enable the robot to offer contextual information about what user actions it has identified and how it intends to respond, which is in line with the goal of cooperation. We report a user study (n = 24) in which we analyzed the performance and user experience with the three different levels of exchange of explicit information. Results indicate that users preferred an intermediate level of exchange of information, in which users knew how the robot was interpreting their intentions, but where the robot was autonomous to take unsupervised action in response to gaze input from the user, needing a less informative action from the human's side.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1511619"},"PeriodicalIF":2.9000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11848069/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2025.1511619","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

For smooth human-robot cooperation, it is crucial that robots understand social cues from humans and respond accordingly. Contextual information gives the human partner real-time insight into how the robot interprets social cues and what action decisions it makes as a result. We propose and implement a novel human-robot cooperation framework that uses augmented reality and user gaze to enable bidirectional communication. Through this framework, the robot can recognize the objects in the scene that the human is looking at and infer the human's intentions within the context of the cooperative task. We propose three designs with increasing levels of explicit information exchange, each providing progressively more information. These designs enable the robot to communicate which user actions it has identified and how it intends to respond, in line with the goal of cooperation. We report a user study (n = 24) in which we analyzed performance and user experience under the three levels of explicit information exchange. Results indicate that users preferred the intermediate level, in which they knew how the robot was interpreting their intentions but the robot acted autonomously on their gaze input without requiring supervision, so that less explicit input was needed from the human.
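The three-level design described in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of how such levels might be modeled; the class names, the dwell-time threshold, and the hand-over intention are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the three levels of explicit information exchange.
# All names, thresholds, and behaviors here are illustrative assumptions,
# not the authors' implementation.

from dataclasses import dataclass
from enum import Enum, auto


class ExchangeLevel(Enum):
    """Three levels of explicit information exchange (increasing detail)."""
    MINIMAL = auto()       # robot acts on gaze input without explanation
    INTERMEDIATE = auto()  # robot shows its interpretation, then acts autonomously
    FULL = auto()          # robot shows its interpretation and awaits confirmation


@dataclass
class GazeEvent:
    """A fixation on a task object, as detected from user gaze."""
    object_id: str
    duration_s: float


def infer_intention(event: GazeEvent, dwell_threshold_s: float = 0.8):
    """Infer a hand-over intention when the user dwells on an object.

    The dwell threshold is an assumed parameter for illustration.
    """
    if event.duration_s >= dwell_threshold_s:
        return f"hand over {event.object_id}"
    return None


def respond(intention: str, level: ExchangeLevel) -> list[str]:
    """Return the robot's communication/action steps for a given level."""
    steps = []
    if level in (ExchangeLevel.INTERMEDIATE, ExchangeLevel.FULL):
        steps.append(f"display (AR): interpreted intention = '{intention}'")
    if level is ExchangeLevel.FULL:
        steps.append("wait: explicit user confirmation before acting")
    steps.append(f"execute: {intention}")
    return steps


if __name__ == "__main__":
    gaze = GazeEvent(object_id="screwdriver", duration_s=1.2)
    intention = infer_intention(gaze)
    if intention:
        for step in respond(intention, ExchangeLevel.INTERMEDIATE):
            print(step)
```

Run with the intermediate level, the sketch prints the AR feedback step followed by the autonomous action, mirroring the behavior the study's participants preferred: the user sees how their gaze was interpreted, but no extra confirmation is demanded of them.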

Source journal metrics
CiteScore: 6.50
Self-citation rate: 5.90%
Annual publications: 355
Review time: 14 weeks
Journal description: Frontiers in Robotics and AI publishes rigorously peer-reviewed research covering all theory and applications of robotics, technology, and artificial intelligence, from biomedical to space robotics.