Theoretical accuracy analysis of N-Ocular vision systems for scene reconstruction, motion estimation, and positioning

P. Firoozfam, S. Negahdaripour
Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2004), September 6, 2004. DOI: 10.1109/TDPVT.2004.1335409

Abstract: Theoretical models are derived to analyze the accuracy of N-ocular vision systems for scene reconstruction, motion estimation, and self-positioning. Covariance matrices are given that bound the uncertainty of reconstructed points in 3D space, of the motion parameters, and of the 3D position of the vision system. Simulation results from several experiments, based on synthetic and real data acquired with a 12-camera stereo panoramic imaging system, demonstrate the application of these models and evaluate the performance of the panoramic system for high-precision 3D mapping and positioning.
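The covariance analysis the abstract describes can be illustrated with a minimal first-order error-propagation sketch for a single rectified stereo pair, a two-camera special case of the paper's N-ocular setting. The geometry (pinhole cameras, principal point at the origin), the isotropic pixel-noise model, and the function name below are illustrative assumptions, not the paper's actual derivation:

```python
import numpy as np

def triangulation_covariance(x, y, d, f, b, sigma_px):
    """First-order covariance of a 3D point triangulated from a
    rectified stereo pair (hypothetical two-camera simplification).

    x, y     : pixel coordinates in the left image (principal point at 0, 0)
    d        : disparity in pixels
    f        : focal length in pixels
    b        : baseline in metres
    sigma_px : standard deviation of the pixel measurement noise
    """
    # Triangulated point: X = x*b/d, Y = y*b/d, Z = f*b/d
    # Jacobian of (X, Y, Z) with respect to the measurements (x, y, d)
    J = np.array([
        [b / d, 0.0,   -x * b / d**2],
        [0.0,   b / d, -y * b / d**2],
        [0.0,   0.0,   -f * b / d**2],
    ])
    sigma_meas = sigma_px**2 * np.eye(3)   # assumed isotropic pixel noise
    return J @ sigma_meas @ J.T            # Sigma_P = J Sigma_meas J^T
```

The diagonal of the returned matrix gives the squared uncertainty bounds along each axis; in particular the depth term reproduces the familiar quadratic growth of stereo depth error, sigma_Z = f * b * sigma_px / d**2, so halving the disparity (doubling the depth) quadruples the depth uncertainty.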