2020 IEEE International Conference on Human-Machine Systems (ICHMS), September 2020
Lingqiu Jin, He Zhang, Yantao Shen, C. Ye
DOI: 10.1109/ICHMS49158.2020.9209377
Human-Robot Interaction for Assisted Object Grasping by a Wearable Robotic Object Manipulation Aid for the Blind
This paper presents a new hand-worn device, called the wearable robotic object manipulation aid (W-ROMA), that can help a visually impaired individual locate a target object and guide the hand to grasp it. W-ROMA may assist the individual for navigational purposes (e.g., grasping a chair and moving it to clear a path) or non-navigational purposes (e.g., grasping a mug). The device consists of a sensing unit and a guiding unit. The sensing unit uses a Structure Core sensor, comprising an RGB-D camera and an Inertial Measurement Unit (IMU), to detect the target object and estimate the device pose. Based on the object and pose information, the guiding unit computes the Desired Hand Movement (DHM) and conveys it to the user through an electro-tactile display to guide the hand toward the object. A speech interface is developed both as an additional way to convey the DHM and as a channel for human-robot interaction. A new method, called Depth-Enhanced Visual-Inertial Odometry (DVIO), is proposed for 6-DOF device pose estimation. It tightly couples the camera's depth and visual data with the IMU data in a graph optimization process to produce more accurate pose estimates than the existing state-of-the-art approach. The estimated poses are used to "stitch" the imaging and point cloud data captured at different viewpoints into a larger view of the scene for object detection. They can also be used to localize the individual for wayfinding. Experimental results demonstrate that the DVIO method outperforms the state-of-the-art VIO approach in 6-DOF pose estimation.
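The abstract describes computing a Desired Hand Movement from the estimated device pose and the detected object's position, then conveying it as a tactile cue. A minimal sketch of that idea follows; the function names, the coarse cue labels, and the 5 cm grasp threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def desired_hand_movement(R_wd, t_wd, p_w_obj):
    """Hypothetical DHM computation: express the target object's world
    position p_w_obj in the device (hand) frame, given the estimated
    device pose (rotation R_wd, translation t_wd, world-from-device)."""
    p_d = R_wd.T @ (p_w_obj - t_wd)          # object position in device frame
    dist = float(np.linalg.norm(p_d))
    direction = p_d / dist if dist > 0 else np.zeros(3)
    return direction, dist

def tactile_cue(direction, dist, reach=0.05):
    """Map the DHM to a coarse directional cue for an electro-tactile
    display (label set and threshold are assumptions)."""
    if dist < reach:
        return "grasp"                        # object within reach
    axis = int(np.argmax(np.abs(direction)))  # dominant movement axis
    positive = direction[axis] > 0
    labels = [["left", "right"], ["down", "up"], ["back", "forward"]]
    return labels[axis][int(positive)]

# Example: device at the origin, object 30 cm ahead along the z-axis.
d, dist = desired_hand_movement(np.eye(3), np.zeros(3), np.array([0.0, 0.0, 0.3]))
print(tactile_cue(d, dist))
```

In the actual system the pose would come from DVIO and the object position from the RGB-D detector; the sketch only shows the geometric step in between.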