{"title":"GMM-based detection of human hand actions for robot spatial attention","authors":"Riccardo Monica, J. Aleotti, S. Caselli","doi":"10.1109/ICAR.2015.7251500","DOIUrl":null,"url":null,"abstract":"In this paper, a spatial attention approach is presented for a robot manipulator equipped with a Kinect range sensor in eye-in-hand configuration. The location of salient object manipulation actions performed by the user is detected by analyzing the motion of the user hand. Relevance of user activities is determined by an attentional approach based on Gaussian mixture models. A next best view planner focuses the viewpoint of the eye-in-hand sensor towards the regions of the workspace that are most salient. 3D scene representation is updated by using a modified version of the KinectFusion algorithm that exploits the robot kinematics. Experiments are reported comparing two variations of next best view strategies.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Advanced Robotics (ICAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR.2015.7251500","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, a spatial attention approach is presented for a robot manipulator equipped with a Kinect range sensor in an eye-in-hand configuration. The locations of salient object manipulation actions performed by the user are detected by analyzing the motion of the user's hand. The relevance of user activities is determined by an attentional approach based on Gaussian mixture models. A next best view planner focuses the viewpoint of the eye-in-hand sensor toward the regions of the workspace that are most salient. The 3D scene representation is updated using a modified version of the KinectFusion algorithm that exploits the robot kinematics. Experiments are reported comparing two variants of the next best view strategy.
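To make the attentional mechanism concrete, the sketch below (not the authors' implementation; the component count, grid resolution, and function names are illustrative assumptions) fits a Gaussian mixture over detected hand-action locations and uses the resulting density as a saliency map for choosing the workspace region toward which the eye-in-hand sensor could be pointed.

```python
# Minimal sketch, assuming hand-action detections are available as 3D points.
# Names, n_components, and the grid resolution are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_attention_gmm(hand_action_points, n_components=3):
    """Fit a GMM to the 3D locations of detected user hand actions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(hand_action_points)  # hand_action_points: (N, 3) array
    return gmm


def most_salient_region(gmm, workspace_bounds, resolution=0.05):
    """Return the workspace grid point with the highest GMM density,
    i.e. a candidate target for the next best view."""
    lo, hi = np.asarray(workspace_bounds[0]), np.asarray(workspace_bounds[1])
    axes = [np.arange(l, h, resolution) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    saliency = gmm.score_samples(grid)  # log-density used as saliency score
    return grid[np.argmax(saliency)]


if __name__ == "__main__":
    # Synthetic hand-action detections clustered around one workspace location.
    rng = np.random.default_rng(0)
    points = rng.normal(loc=[0.4, 0.0, 0.1], scale=0.05, size=(50, 3))
    gmm = fit_attention_gmm(points)
    target = most_salient_region(gmm, ([0.0, -0.5, 0.0], [1.0, 0.5, 0.5]))
    print("Most salient workspace location:", target)
```

In this reading, the GMM density plays the role of the attention map, and a next best view planner would select sensor poses that best observe the high-density regions.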