{"title":"bootstrap实时自我运动估计和场景建模","authors":"Xiang Zhang, Yakup Genç","doi":"10.1109/3DIM.2005.25","DOIUrl":null,"url":null,"abstract":"Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking the salient features. Often larger translational motions and even moderate rotational motions can result in drifts. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drifts in flow-based feature tracking and allows establishment of correspondences across the frames with large baselines. Selective and periodic such correspondence mappings drastically improve scene and motion reconstruction while adhering to the real-time requirements. 
The method is experimentally tested to be both accurate and computational efficient.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Bootstrapped real-time ego motion estimation and scene modeling\",\"authors\":\"Xiang Zhang, Yakup Genç\",\"doi\":\"10.1109/3DIM.2005.25\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking the salient features. Often larger translational motions and even moderate rotational motions can result in drifts. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drifts in flow-based feature tracking and allows establishment of correspondences across the frames with large baselines. Selective and periodic such correspondence mappings drastically improve scene and motion reconstruction while adhering to the real-time requirements. 
The method is experimentally tested to be both accurate and computational efficient.\",\"PeriodicalId\":170883,\"journal\":{\"name\":\"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/3DIM.2005.25\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DIM.2005.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Bootstrapped real-time ego motion estimation and scene modeling
Estimating the motion of a moving camera in an unknown environment is essential for applications ranging from as-built reconstruction to augmented reality. It is a challenging problem, especially when real-time performance is required. Our approach estimates the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking visual features from frame to frame with optical flow. Optical-flow-based methods have limitations in tracking salient features: larger translational motions, and even moderate rotational motions, often cause drift. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around each reconstructed feature point provides matching information that prevents drift in flow-based feature tracking and allows correspondences to be established across frames with large baselines. Applying such correspondence mappings selectively and periodically drastically improves scene and motion reconstruction while meeting the real-time requirements. Experiments show the method to be both accurate and computationally efficient.
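The drift-correction idea in the abstract — periodically re-matching a stored patch around a landmark instead of trusting accumulated frame-to-frame flow — can be sketched with a simple normalized cross-correlation (NCC) search. This is an illustrative assumption, not the paper's actual matching procedure (the abstract does not specify the matcher, and the paper warps the patch by the estimated plane homography; a plain translational NCC search is used here for brevity):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def relocalize(frame, template, guess, search_radius=5):
    """Search a window around `guess` (row, col) for the best NCC match
    to the stored landmark `template`, correcting drift accumulated by
    frame-to-frame flow tracking."""
    h, w = template.shape
    best_score, best_pos = -1.0, guess
    r0, c0 = guess
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            # Skip candidate windows that fall outside the frame.
            if r < 0 or c < 0 or r + h > frame.shape[0] or c + w > frame.shape[1]:
                continue
            score = ncc(frame[r:r + h, c:c + w], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

In a tracking loop, `guess` would be the (possibly drifted) position reported by optical flow, and `relocalize` would be run only selectively and periodically, as the abstract suggests, to keep the per-frame cost compatible with real-time operation.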