Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot

Akishige Yuguchi, Tomoaki Inoue, G. A. G. Ricardez, Ming Ding, J. Takamatsu, T. Ogasawara
{"title":"Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot","authors":"Akishige Yuguchi, Tomoaki Inoue, G. A. G. Ricardez, Ming Ding, J. Takamatsu, T. Ogasawara","doi":"10.1109/RO-MAN46459.2019.8956451","DOIUrl":null,"url":null,"abstract":"As sensing and image recognition technologies advance, the environments where service robots operate expand into human-centered environments. Since the roles of service robots depend on the user situations, it is important for the robots to understand human intentions. Gaze information, such as gazed objects (i. e., the objects humans are looking at) can help to understand the users’ intentions. In this paper, we propose a real-time gazed object identification method from RGBD images captured by a camera mounted on a mobile service robot. First, we search for the candidate gazed objects using state-of-the-art, real-time object detection. Second, we estimate the human face direction using facial landmarks extracted by a real-time face detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If the gazed object identification fails even though a user is looking at an object, i. e., has a fixed gaze direction, the robot can determine whether the object is inside or outside the robot’s view based on the face direction, and, then, change its point of view to improve the identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification and the improvement of the identification accuracy by changing the robot’s point of view.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN46459.2019.8956451","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

As sensing and image-recognition technologies advance, service robots increasingly operate in human-centered environments. Since the roles of service robots depend on the users' situations, it is important for the robots to understand human intentions. Gaze information, such as gazed objects (i.e., the objects humans are looking at), can help to understand the users' intentions. In this paper, we propose a real-time method to identify gazed objects from RGB-D images captured by a camera mounted on a mobile service robot. First, we search for candidate gazed objects using state-of-the-art, real-time object detection. Second, we estimate the human face direction using facial landmarks extracted by a real-time face detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If identification fails even though the user is looking at an object, i.e., has a fixed gaze direction, the robot can determine from the face direction whether the object lies inside or outside its field of view, and then change its point of view to improve the identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification method and the improvement in identification accuracy obtained by changing the robot's point of view.
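The core identification step described above, casting a ray from the detected face along the estimated face direction and selecting the detected object nearest to that ray, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the angular threshold, and the representation of each detected object by a single 3-D centroid taken from the RGB-D point cloud are assumptions made for illustration only.

import numpy as np

def identify_gazed_object(face_origin, face_direction, object_centroids,
                          angle_threshold_deg=15.0):
    """Pick the detected object whose 3-D centroid lies closest to the ray
    cast from the face along the estimated face direction.

    face_origin:      (3,) face position in the camera frame [m] (assumed given)
    face_direction:   (3,) estimated face-direction vector (assumed given)
    object_centroids: dict name -> (3,) centroid from the RGB-D point cloud
    Returns the object name, or None if no object falls within the angular
    threshold (i.e., identification fails).
    """
    d = np.asarray(face_direction, dtype=float)
    d = d / np.linalg.norm(d)
    best_name, best_angle = None, np.inf
    for name, c in object_centroids.items():
        v = np.asarray(c, dtype=float) - np.asarray(face_origin, dtype=float)
        dist = np.linalg.norm(v)
        if dist < 1e-6:
            continue
        # Angle between the gaze ray and the direction to the object centroid.
        cos_a = np.clip(np.dot(d, v / dist), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name if best_angle <= angle_threshold_deg else None

# Hypothetical usage: two detected objects, face at the origin of the camera frame.
objects = {"cup": [0.4, -0.1, 1.2], "book": [-0.3, 0.0, 1.5]}
print(identify_gazed_object([0.0, -0.4, 0.8], [0.3, 0.2, 0.4], objects))

In the paper's pipeline, the face origin and direction would come from the facial-landmark-based estimation, and a None result would correspond to the failure case that triggers the robot's change of point of view.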