{"title":"Automatic feature correspondence for scene reconstruction","authors":"Philip W. Smith, Mark D. Elstrom","doi":"10.1109/IM.1999.805378","DOIUrl":null,"url":null,"abstract":"To construct complete three-dimensional representations from multiple range images or to employ color images as texture maps for surface models, the relative positions of the sensors used to capture the data must be known. Most data-driven methods for pose estimation require either an accurate initial estimation of the relative orientation be specified or a corresponding set of features be extracted from images. The autonomous identification of corresponding feature positions thus represents the major difficulty in creating completely automated registration and reconstruction systems that place no restrictions on relative sensor positions. In this paper, an automated feature correspondence technique, specifically designed for the task of multi-modal view registration is presented which requires no initial pose estimates or geometric matching constraints. Both photo-realistic and 3-D scene models are presented that were constructed autonomously by systems employing the described matching algorithm.","PeriodicalId":110347,"journal":{"name":"Second International Conference on 3-D Digital Imaging and Modeling (Cat. No.PR00062)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Second International Conference on 3-D Digital Imaging and Modeling (Cat. No.PR00062)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IM.1999.805378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
To construct complete three-dimensional representations from multiple range images, or to employ color images as texture maps for surface models, the relative positions of the sensors used to capture the data must be known. Most data-driven methods for pose estimation require either that an accurate initial estimate of the relative orientation be specified or that a corresponding set of features be extracted from the images. The autonomous identification of corresponding feature positions thus represents the major difficulty in creating fully automated registration and reconstruction systems that place no restrictions on relative sensor positions. In this paper, an automated feature correspondence technique, specifically designed for the task of multi-modal view registration, is presented that requires no initial pose estimates or geometric matching constraints. Both photo-realistic and 3-D scene models are presented that were constructed autonomously by systems employing the described matching algorithm.
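To make concrete how a set of corresponding feature positions can drive a relative-pose (registration) estimate between two views, the following is a minimal, generic sketch using OpenCV. It is not the correspondence algorithm described in the paper; it simply matches ORB descriptors between two images and feeds the resulting point pairs to an essential-matrix pose estimate. The file names "view_a.png" and "view_b.png" and the intrinsic matrix K are hypothetical placeholders.

```python
# Illustrative sketch only: generic feature matching between two views with OpenCV.
# This is NOT the paper's correspondence method; it shows how matched feature
# positions can seed a relative-pose estimate for registration.

import cv2
import numpy as np


def match_features(img_a, img_b, max_matches=200):
    """Detect ORB keypoints in both images and return matched pixel coordinates."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return pts_a, pts_b


if __name__ == "__main__":
    # Hypothetical input images of the same scene seen from two sensor positions.
    img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)
    pts_a, pts_b = match_features(img_a, img_b)

    # Assumed intrinsic matrix; a real system would use calibrated values.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # The correspondences drive the relative-pose estimate between the two views.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    print("Estimated rotation:\n", R)
    print("Estimated translation direction:\n", t.ravel())
```

Note that such descriptor matching assumes both views come from similar imaging modalities; the paper's contribution targets the harder multi-modal case (e.g., range and color sensors) without initial pose estimates or geometric matching constraints.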