{"title":"使用视觉和触觉数据生成三维物体假设","authors":"M. Boshra, Hong Zhang","doi":"10.1109/IROS.1994.407407","DOIUrl":null,"url":null,"abstract":"Most existing 3-D object recognition/localization systems rely on a single type of sensory data, although several sensors may be available in a robot task to provide information about the objects to be recognized. In this paper, the authors present a technique to localize polyhedral objects by integrating visual and tactile data. It is assumed that visual data is provided by a monocular visual sensor, while tactile data by a planar-array tactile sensor in contact with the object to be localized. The authors focus on using tactile data in the hypothesis generation phase to reduce the requirements of visual features for localization to a V-junction only. The main concept of this technique is to compute a set of partial pose hypotheses off-line by utilizing tactile data, and then complement these partial hypotheses on-line using visual data. The technique presented is tested using simulated and real data.<<ETX>>","PeriodicalId":437805,"journal":{"name":"Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Use of visual and tactile data for generation of 3-D object hypotheses\",\"authors\":\"M. Boshra, Hong Zhang\",\"doi\":\"10.1109/IROS.1994.407407\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most existing 3-D object recognition/localization systems rely on a single type of sensory data, although several sensors may be available in a robot task to provide information about the objects to be recognized. In this paper, the authors present a technique to localize polyhedral objects by integrating visual and tactile data. It is assumed that visual data is provided by a monocular visual sensor, while tactile data by a planar-array tactile sensor in contact with the object to be localized. The authors focus on using tactile data in the hypothesis generation phase to reduce the requirements of visual features for localization to a V-junction only. The main concept of this technique is to compute a set of partial pose hypotheses off-line by utilizing tactile data, and then complement these partial hypotheses on-line using visual data. 
The technique presented is tested using simulated and real data.<<ETX>>\",\"PeriodicalId\":437805,\"journal\":{\"name\":\"Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94)\",\"volume\":\"68 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.1994.407407\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.1994.407407","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Most existing 3-D object recognition/localization systems rely on a single type of sensory data, although several sensors may be available in a robot task to provide information about the objects to be recognized. In this paper, the authors present a technique to localize polyhedral objects by integrating visual and tactile data. It is assumed that visual data is provided by a monocular visual sensor, while tactile data is provided by a planar-array tactile sensor in contact with the object to be localized. The authors focus on using tactile data in the hypothesis-generation phase to reduce the visual features required for localization to a single V-junction. The main concept of the technique is to compute a set of partial pose hypotheses off-line using tactile data, and then complete these partial hypotheses on-line using visual data. The technique is tested on simulated and real data.
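As a rough illustration of the two-phase idea described in the abstract (not the paper's actual formulation), the Python sketch below splits hypothesis generation the same way: an off-line pass derives a partial pose from the identity of the model face resting on the planar tactile sensor, and an on-line pass fills in the remaining degrees of freedom from a single observed V-junction. All function and field names are hypothetical, and the on-line step assumes a simplified top-down orthographic view in place of the monocular perspective constraints the paper would solve.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    if np.isclose(c, -1.0):  # antiparallel: rotate 180 deg about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

SENSOR_NORMAL = np.array([0.0, 0.0, 1.0])  # planar tactile array, z-up

def offline_partial_hypotheses(model_faces):
    """Off-line phase: one partial pose hypothesis per model face that could
    rest on the sensor.  Aligning the face's outward normal with -z fixes two
    rotational DOFs and the height; a rotation about z and a translation in
    the sensor plane remain free for the visual data to resolve."""
    return [{"face": f["name"],
             "R0": rotation_aligning(f["normal"], -SENSOR_NORMAL)}
            for f in model_faces]

def online_complete(hypothesis, v_junction):
    """On-line phase (sketch): one V-junction -- a vertex with two incident
    edge directions -- resolves the remaining DOFs.  Here the in-plane
    rotation is read off one projected edge direction under the assumed
    orthographic view; the translation comes from the vertex position."""
    dx, dy = v_junction["edge_dir"]
    theta = np.arctan2(dy, dx)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    t = np.array([v_junction["vertex"][0], v_junction["vertex"][1], 0.0])
    return {"face": hypothesis["face"], "R": Rz @ hypothesis["R0"], "t": t}

# Hypothetical usage: a unit-cube bottom face resting on the sensor.
faces = [{"name": "bottom", "normal": np.array([0.0, 0.0, -1.0])}]
partial = offline_partial_hypotheses(faces)      # computed off-line, once
pose = online_complete(partial[0],               # completed at run time
                       {"vertex": (0.12, 0.34), "edge_dir": (1.0, 1.0)})
print(pose["face"], np.round(pose["R"], 2), pose["t"])
```

The sketch also shows why a single V-junction can suffice: the planar contact already fixes two rotational degrees of freedom and the height off-line, leaving only a rotation about the sensor normal and an in-plane translation for the visual features to determine.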