{"title":"Synchronization and Calibration of a Camera Network for 3D Event Reconstruction from Live Video","authors":"Sudipta N. Sinha, M. Pollefeys","doi":"10.1109/CVPR.2005.338","DOIUrl":null,"url":null,"abstract":"We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm to search for the epipoles as well as for robustness. In the next stage the calibration and synchronization for the complete camera network is recovered and then refined through maximum likelihood estimation. Finally, a visual hull algorithm is used to the recover the dynamic shape of the observed object.","PeriodicalId":89346,"journal":{"name":"Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops","volume":"62 1","pages":"1196"},"PeriodicalIF":0.0000,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2005.338","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm that searches for the epipoles and provides robustness to outliers. In the next stage the calibration and synchronization for the complete camera network are recovered and then refined through maximum likelihood estimation. Finally, a visual hull algorithm is used to recover the dynamic shape of the observed object.
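
For illustration only, the following is a minimal, self-contained Python sketch of the pairwise stage described above: jointly hypothesizing a temporal offset and a fundamental matrix with RANSAC. It is not the authors' implementation. The paper scores hypotheses by searching for the epipoles and using epipolar tangents to the silhouettes, whereas this simplified sketch assumes each view provides a hypothetical per-frame 2D point track (track_a, track_b) and scores hypotheses with the Sampson epipolar error.

import numpy as np

def eight_point(pa, pb):
    """Normalized 8-point estimate of the fundamental matrix F (pb^T F pa = 0)."""
    def normalize(p):
        mean = p.mean(axis=0)
        scale = np.sqrt(2.0) / np.linalg.norm(p - mean, axis=1).mean()
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1.0]])
        return np.c_[p, np.ones(len(p))] @ T.T, T
    pah, Ta = normalize(pa)
    pbh, Tb = normalize(pb)
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.stack([np.kron(b, a) for a, b in zip(pah, pbh)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)            # enforce rank 2
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    return Tb.T @ F @ Ta                   # undo the normalizing transforms

def sampson_error(F, pa, pb):
    """Sampson approximation of the squared epipolar error per correspondence."""
    pah = np.c_[pa, np.ones(len(pa))]
    pbh = np.c_[pb, np.ones(len(pb))]
    Fx = pah @ F.T                         # epipolar lines F x in the second view
    Ftx = pbh @ F                          # epipolar lines F^T x' in the first view
    num = np.sum(pbh * Fx, axis=1) ** 2
    den = Fx[:, 0]**2 + Fx[:, 1]**2 + Ftx[:, 0]**2 + Ftx[:, 1]**2
    return num / den

def ransac_offset_and_F(track_a, track_b, max_offset=50, n_iters=2000,
                        thresh=2.0, rng=np.random.default_rng(0)):
    """Jointly search for a temporal offset and fundamental matrix,
    keeping the hypothesis with the most inlier frames."""
    best = (-1, None, None)
    for _ in range(n_iters):
        offset = int(rng.integers(-max_offset, max_offset + 1))
        # Overlapping frame range under this offset hypothesis
        # (frame t in view A corresponds to frame t - offset in view B).
        lo, hi = max(0, offset), min(len(track_a), len(track_b) + offset)
        if hi - lo < 16:
            continue
        frames = np.arange(lo, hi)
        pa, pb = track_a[frames], track_b[frames - offset]
        sample = rng.choice(len(frames), 8, replace=False)
        F = eight_point(pa[sample], pb[sample])
        inliers = int(np.sum(sampson_error(F, pa, pb) < thresh**2))
        if inliers > best[0]:
            best = (inliers, offset, F)
    return best[1], best[2]

In this toy version the offset is drawn at random each iteration together with a minimal 8-point sample, so the same consensus count selects both the synchronization and the epipolar geometry; in the paper the pairwise results are then propagated to the whole camera network and refined with maximum likelihood estimation.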