Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot

Akishige Yuguchi, Tomoaki Inoue, G. A. G. Ricardez, Ming Ding, J. Takamatsu, T. Ogasawara

2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), October 2019
DOI: 10.1109/RO-MAN46459.2019.8956451
Citations: 2
Abstract
As sensing and image recognition technologies advance, the environments where service robots operate are expanding into human-centered environments. Since the roles of service robots depend on the users' situations, it is important for the robots to understand human intentions. Gaze information, such as gazed objects (i.e., the objects humans are looking at), can help to understand the users' intentions. In this paper, we propose a real-time gazed object identification method that uses RGB-D images captured by a camera mounted on a mobile service robot. First, we search for candidate gazed objects using state-of-the-art, real-time object detection. Second, we estimate the human face direction using facial landmarks extracted by a real-time face detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If identification fails even though the user is looking at an object (i.e., has a fixed gaze direction), the robot can determine whether the object is inside or outside its view based on the face direction and then change its point of view to improve the identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification and the improvement in identification accuracy obtained by changing the robot's point of view.
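The core geometric step — searching for an object along the estimated face direction — can be sketched as a ray-versus-centroid test: pick the detected object whose 3D centroid deviates least in angle from the face-direction ray, rejecting candidates outside a tolerance cone. This is a minimal illustration under assumed inputs, not the authors' exact implementation; the function name, the `max_angle_deg` threshold, and the `(label, centroid)` object format are all hypothetical.

```python
import math

def identify_gazed_object(head_pos, face_dir, objects, max_angle_deg=15.0):
    """Return the label of the detected object closest in angle to the
    face-direction ray, or None if no object lies within max_angle_deg.

    head_pos : (x, y, z) position of the face in the camera/world frame.
    face_dir : direction vector of the estimated face orientation.
    objects  : list of (label, (x, y, z)) centroids from the object detector.
    (Hypothetical interface for illustration.)
    """
    # Normalize the face-direction vector.
    norm = math.sqrt(sum(c * c for c in face_dir))
    d = tuple(c / norm for c in face_dir)

    best_label, best_angle = None, max_angle_deg
    for label, center in objects:
        # Vector from the head to the object centroid.
        v = tuple(center[i] - head_pos[i] for i in range(3))
        vlen = math.sqrt(sum(c * c for c in v))
        if vlen == 0:
            continue
        # Angle between the gaze ray and the head-to-object vector.
        cos_a = sum(d[i] * v[i] for i in range(3)) / vlen
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best_label, best_angle = label, angle
    return best_label
```

A failed search (return value `None`) is exactly the situation where, per the abstract, the robot would compare the face direction against its own field of view and change its point of view before trying again.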