{"title":"Experimental studies on minimal representation multisensor fusion","authors":"R. Joshi, A. Sanderson","doi":"10.1109/ICAR.1997.620244","DOIUrl":null,"url":null,"abstract":"We describe laboratory experiments, in which tactile data obtained from the finger-tips of a robot hand, while it is holding an object in front of a calibrated camera, is fused with the vision data from the camera, to determine the object identity, pose, and the touch and vision data correspondences. The touch data is incomplete due to required hand configurations, while nearly half of the vision data are spurious due to the presence of the hand in the image. Using either sensor alone results in ambiguous or incorrect interpretations. A minimal representation size framework is used to formulate the multisensor fusion problem, and can automatically select the object class, correspondence (data subsamples), and pose parameters. The experiments demonstrate that it consistently finds the correct interpretation, and is a practical method for multisensor fusion and model selection.","PeriodicalId":228876,"journal":{"name":"1997 8th International Conference on Advanced Robotics. Proceedings. ICAR'97","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1997 8th International Conference on Advanced Robotics. Proceedings. ICAR'97","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR.1997.620244","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
We describe laboratory experiments in which tactile data obtained from the fingertips of a robot hand holding an object in front of a calibrated camera are fused with vision data from that camera to determine the object identity, the object pose, and the correspondences between the touch and vision data. The touch data are incomplete because of the hand configurations required to grasp the object, while nearly half of the vision data are spurious because the hand itself appears in the image. Using either sensor alone therefore yields ambiguous or incorrect interpretations. We formulate the multisensor fusion problem in a minimal representation size framework, which automatically selects the object class, the correspondences (data subsamples), and the pose parameters. The experiments demonstrate that the framework consistently finds the correct interpretation and is a practical method for multisensor fusion and model selection.
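
The core idea of the framework, choosing the (model, correspondence, pose) interpretation that minimizes the total number of bits needed to encode the observed data, can be illustrated with a toy sketch. The following is not the authors' implementation: the 1-D point models, the bit costs, the translation-only pose fit, and the nearest-neighbour matching are all illustrative assumptions made for this example.

```python
import math

# Toy minimal-representation-size model selection (illustrative only).
# Each datum is encoded either as a residual against a matched model
# point (cheap when the model fits) or "raw" as an outlier; the model
# whose total description length is smallest is selected.

RESOLUTION = 0.01     # assumed measurement quantization
OUTLIER_BITS = 8.0    # assumed cost of encoding one datum raw
POSE_BITS = 32.0      # assumed cost of one pose parameter

def residual_bits(r):
    """Bits to encode a matched datum's residual at the given resolution."""
    return math.log2(max(abs(r) / RESOLUTION, 1.0)) + 1.0

def description_length(model_pts, data_pts):
    """Fit a 1-D translation (the 'pose') as the median offset to the
    nearest model point, then let each datum take the cheaper of the
    matched or outlier encoding."""
    offsets = sorted(d - min(model_pts, key=lambda m: abs(m - d))
                     for d in data_pts)
    t = offsets[len(offsets) // 2]
    data_cost = sum(
        min(residual_bits(min(abs(d - (m + t)) for m in model_pts)),
            OUTLIER_BITS)
        for d in data_pts)
    return POSE_BITS + data_cost

# Two hypothetical object models; the data actually come from "peg"
# translated by 0.5, plus one spurious measurement (9.0).
models = {"peg": [0.0, 1.0, 2.0], "block": [0.0, 0.3, 2.5]}
data = [0.51, 1.49, 2.52, 9.0]

for name, pts in models.items():
    print(f"{name}: {description_length(pts, data):.1f} bits")
best = min(models, key=lambda name: description_length(models[name], data))
print("selected model:", best)
```

Running this prints a lower description length for "peg", so it is selected; the spurious datum is encoded as an outlier rather than forcing a bad pose, which mirrors how the paper's framework tolerates incomplete touch data and spurious vision data by treating correspondence as part of the selection.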