Bi-Manual Articulated Robot Teleoperation using an External RGB-D Range Sensor
Persie Rolley-Parnell, D. Kanoulas, Arturo Laurenzi, Brian Delhaisse, L. Rozo, D. Caldwell, N. Tsagarakis
2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 2018
DOI: 10.1109/ICARCV.2018.8581174
Abstract
In this paper, we present a bi-manual teleoperation system controlled by a human operator through three-dimensional (3D) skeleton extraction. The input data come from a low-cost RGB-D range sensor, such as the ASUS Xtion PRO. To achieve this, we extend the recently developed OpenPose package to 3D. The first stage of our method runs the OpenPose Convolutional Neural Network (CNN) on a sequence of RGB images. The extracted two-dimensional (2D) human skeleton joint locations are then lifted to their 3D poses in the camera frame using the corresponding depth data. The output of this process drives the end-poses of the robotic hands, following the human hand movements, through a whole-body inverse kinematics process in Cartesian space. Finally, we implement the method as a ROS wrapper package and test it on the centaur-like CENTAURO robot. We demonstrate the system on a real-time box and lever manipulation task driven by a human demonstration.
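
As a rough sketch of the 2D-to-3D lifting step described in the abstract, the snippet below back-projects OpenPose keypoints into camera-frame coordinates. It assumes a pinhole camera model, a depth image registered to the RGB frame, depth values in millimetres, and hypothetical function and parameter names; it is an illustration of the general technique, not the authors' released code.

```python
# Minimal sketch: lift 2D OpenPose keypoints to 3D using a registered depth image.
# Assumptions (not from the paper): pinhole intrinsics fx, fy, cx, cy; depth in mm.
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_image, fx, fy, cx, cy,
                         conf_threshold=0.3):
    """Back-project (u, v, confidence) keypoints into 3D camera-frame points."""
    joints_3d = []
    for (u, v, conf) in keypoints_2d:
        z = depth_image[int(round(v)), int(round(u))] / 1000.0  # mm -> metres
        if conf < conf_threshold or z <= 0.0:
            joints_3d.append(None)           # joint not detected or depth missing
            continue
        x = (u - cx) * z / fx                # standard pinhole back-projection
        y = (v - cy) * z / fy
        joints_3d.append(np.array([x, y, z]))
    return joints_3d
```

In a pipeline like the one described, the recovered wrist and hand positions would then be expressed relative to a reference body joint and mapped into Cartesian end-effector targets for the whole-body inverse kinematics solver.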