{"title":"基于多模态信息的机器人抓取结果预测","authors":"Chao Yang, Peng Du, F. Sun, Bin Fang, Jie Zhou","doi":"10.1109/ROBIO.2018.8665307","DOIUrl":null,"url":null,"abstract":"In the service robot application scenario, the stable grasp requires careful balancing the contact forces and the property of the manipulation objects, such as shape, weight. Deducing whether a particular grasp would be stable from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through tactile sensor provides an appealing avenue toward more successful and consistent robotic grasping. Other than this, an object's shape and weight would also decide whether to grasping stabilize or not. In this work, we investigate the question of whether tactile information and object intrinsic property aid in predicting grasp outcomes within a multi-modal sensing framework that combines vision, tactile and object intrinsic property. To that end, we collected more than 2550 grasping trials using a 3-finger robot hand which mounted with multiple tactile sensors. We evaluated our multi-modal deep neural network models to directly predict grasp stability from either modality individually or multimodal modalities. Our experimental results indicate the visual combination of tactile readings and intrinsic properties of the object significantly improve grasping prediction performance.","PeriodicalId":417415,"journal":{"name":"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Predict Robot Grasp Outcomes based on Multi-Modal Information\",\"authors\":\"Chao Yang, Peng Du, F. Sun, Bin Fang, Jie Zhou\",\"doi\":\"10.1109/ROBIO.2018.8665307\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the service robot application scenario, the stable grasp requires careful balancing the contact forces and the property of the manipulation objects, such as shape, weight. Deducing whether a particular grasp would be stable from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through tactile sensor provides an appealing avenue toward more successful and consistent robotic grasping. Other than this, an object's shape and weight would also decide whether to grasping stabilize or not. In this work, we investigate the question of whether tactile information and object intrinsic property aid in predicting grasp outcomes within a multi-modal sensing framework that combines vision, tactile and object intrinsic property. To that end, we collected more than 2550 grasping trials using a 3-finger robot hand which mounted with multiple tactile sensors. We evaluated our multi-modal deep neural network models to directly predict grasp stability from either modality individually or multimodal modalities. 
Our experimental results indicate the visual combination of tactile readings and intrinsic properties of the object significantly improve grasping prediction performance.\",\"PeriodicalId\":417415,\"journal\":{\"name\":\"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBIO.2018.8665307\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO.2018.8665307","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Predict Robot Grasp Outcomes based on Multi-Modal Information
In service robot applications, a stable grasp requires carefully balancing the contact forces against the properties of the manipulated object, such as its shape and weight. Inferring whether a particular grasp will be stable from indirect measurements such as vision is therefore quite challenging, and directly sensing contacts through tactile sensors offers an appealing avenue toward more successful and consistent robotic grasping. Beyond contact sensing, an object's intrinsic properties, such as shape and weight, also determine whether a grasp will be stable. In this work, we investigate whether tactile information and intrinsic object properties aid in predicting grasp outcomes within a multi-modal sensing framework that combines vision, touch, and intrinsic object properties. To that end, we collected more than 2550 grasping trials using a three-finger robot hand equipped with multiple tactile sensors. We evaluated multi-modal deep neural network models that directly predict grasp stability from each modality individually and from their combination. Our experimental results indicate that combining visual input with tactile readings and the intrinsic properties of the object significantly improves grasp prediction performance.
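
As a concrete illustration of the fusion approach the abstract outlines, below is a minimal PyTorch sketch of a classifier that encodes each modality separately and concatenates the embeddings before predicting a stability logit. The paper does not publish its architecture in this abstract, so the encoder choices, layer sizes, and input shapes (a 64x64 RGB image, a 96-dimensional tactile vector, a 4-dimensional object-property vector) are illustrative assumptions, as is every name in the code.

# Hypothetical sketch of a multi-modal grasp-outcome classifier.
# All dimensions and layer choices are illustrative assumptions,
# not the architecture from Yang et al.
import torch
import torch.nn as nn

class GraspOutcomePredictor(nn.Module):
    def __init__(self, tactile_dim=96, property_dim=4, embed_dim=64):
        super().__init__()
        # Vision branch: small CNN over an RGB crop of the grasp scene.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Tactile branch: MLP over flattened readings from all finger sensors.
        self.tactile_encoder = nn.Sequential(
            nn.Linear(tactile_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Object-property branch: e.g. weight plus coarse shape descriptors.
        self.property_encoder = nn.Sequential(
            nn.Linear(property_dim, 32), nn.ReLU(),
            nn.Linear(32, embed_dim),
        )
        # Fusion head: concatenate the three embeddings, output one logit.
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, tactile, properties):
        z = torch.cat([
            self.vision_encoder(image),
            self.tactile_encoder(tactile),
            self.property_encoder(properties),
        ], dim=-1)
        return self.head(z)  # logit; apply sigmoid for P(stable grasp)

# Usage example: a batch of 8 grasp trials with randomly generated inputs.
model = GraspOutcomePredictor()
logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 96), torch.randn(8, 4))
probs = torch.sigmoid(logits)  # per-trial predicted probability of stability

Ablating a branch (e.g. zeroing the tactile input) in a model like this is one simple way to compare single-modality against combined-modality prediction, which is the comparison the abstract reports.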