{"title":"Multi-sensory fusion and model-based recognition of complex objects","authors":"M. Devy, R. Boumaza","doi":"10.1109/MFI.1994.398434","DOIUrl":null,"url":null,"abstract":"Perception with complementary sensors like a color camera and a laser range finder, make easier the recognition of objects in a 3D scene. This paper copes with the recognition of non-polyhedral objects, described each one by a REV graph and an aspect table, required to afford reasoning about visibility. The authors focus on the relations between segmentation and recognition strategies. A set of segmentation operators, executed by logical sensors, can be requested with respect to the state of the recognition task, in order to extract the more suitable set of features from the sensory data; if needed, the fusion of perceptual data can provide the more accurate estimates of the perceived geometric features. The control module of the recognition task, follows a classical \"hypothesize and test\" paradigm; this paper concerns only the hypothesis generation and verification, after one acquisition. Recognition strategies could be compiled off line, according to the object and the sensor models. The authors show how such strategies allow one to limit complexity of the segmentation and recognition processes; experimental results on real perceptual data, validate this method.<<ETX>>","PeriodicalId":133630,"journal":{"name":"Proceedings of 1994 IEEE International Conference on MFI '94. Multisensor Fusion and Integration for Intelligent Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE International Conference on MFI '94. Multisensor Fusion and Integration for Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI.1994.398434","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
Perception with complementary sensors, such as a color camera and a laser range finder, makes the recognition of objects in a 3D scene easier. This paper addresses the recognition of non-polyhedral objects, each described by a REV graph and an aspect table, which are required to reason about visibility. The authors focus on the relations between segmentation and recognition strategies. A set of segmentation operators, executed by logical sensors, can be requested according to the state of the recognition task in order to extract the most suitable set of features from the sensory data; if needed, the fusion of perceptual data can provide more accurate estimates of the perceived geometric features. The control module of the recognition task follows a classical "hypothesize and test" paradigm; this paper concerns only hypothesis generation and verification after a single acquisition. Recognition strategies can be compiled off line according to the object and sensor models. The authors show how such strategies limit the complexity of the segmentation and recognition processes; experimental results on real perceptual data validate this method.
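The abstract describes a control scheme in which segmentation operators are only requested when the current state of the "hypothesize and test" recognition loop needs the features they provide. The sketch below illustrates that idea only; it is not the authors' implementation, and every name in it (LogicalSensor, ObjectModel, request_features, recognize, the feature labels) is a hypothetical placeholder.

```python
# Minimal sketch, assuming a "hypothesize and test" loop in which logical
# sensors (segmentation operators) are run only on demand. All names and
# data shapes are illustrative assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass
class LogicalSensor:
    """Wraps one segmentation operator and declares the features it yields."""
    name: str
    provides: set        # e.g. {"silhouette"} or {"surface_patches"}
    operator: callable   # segmentation routine applied to the raw sensor data


@dataclass
class ObjectModel:
    """Object model reduced to the feature sets used at each recognition step."""
    name: str
    coarse_signature: set   # features used for hypothesis generation
    fine_signature: set     # features used for hypothesis verification


def request_features(sensors, needed, raw_data):
    """Run only the operators whose output intersects the requested features."""
    out = {}
    for s in sensors:
        if s.provides & needed:
            out.update(s.operator(raw_data))
    return out


def recognize(raw_data, models, sensors, threshold=0.8):
    # 1. Hypothesis generation from cheap, coarse features.
    coarse = request_features(sensors, {"silhouette"}, raw_data)
    hypotheses = [m for m in models if m.coarse_signature <= set(coarse)]

    # 2. Hypothesis verification: finer features are requested per candidate
    #    only, which keeps the segmentation effort bounded.
    accepted = []
    for m in hypotheses:
        fine = request_features(sensors, m.fine_signature, raw_data)
        score = len(m.fine_signature & set(fine)) / max(len(m.fine_signature), 1)
        if score >= threshold:
            accepted.append((m.name, score))
    return accepted


# Toy usage with stub operators (raw_data is opaque in this sketch).
edge_sensor = LogicalSensor("edges", {"silhouette"},
                            lambda d: {"silhouette": "outline"})
range_sensor = LogicalSensor("range", {"surface_patches"},
                             lambda d: {"surface_patches": ["cylinder"]})
mug = ObjectModel("mug", {"silhouette"}, {"silhouette", "surface_patches"})
print(recognize(raw_data=None, models=[mug],
                sensors=[edge_sensor, range_sensor]))
```

The design point the sketch tries to capture is that expensive operators (here, the range-based one) never run during hypothesis generation; they are invoked only to verify candidates that survived the coarse test.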