Extrinsic calibration for wide-baseline RGB-D camera network
Ju Shen, Wanxin Xu, Ying Luo, Po-Chang Su, S. Cheung
2014 IEEE 16th International Workshop on Multimedia Signal Processing (MMSP), Nov. 2014. DOI: 10.1109/MMSP.2014.6958798
In recent years, color and depth (RGB-D) camera systems have attracted intensive attention because of their wide applications in image-based rendering, 3D model reconstruction, and human tracking and pose estimation. These applications often require multiple color and depth cameras to be placed with wide separation so as to capture the scene objects from different perspectives. The difference in modality and the wide baseline make calibration a challenging problem. In this paper, we present an algorithm that simultaneously and automatically calibrates the extrinsics across multiple color and depth cameras in the network. Rather than using the standard checkerboard, we use a sphere as the calibration object to identify correspondences across different views. We experimentally demonstrate that our calibration framework can seamlessly integrate different views with wide baselines and outperforms other techniques in the literature.
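To illustrate the core idea, the sketch below shows one standard way to recover a pairwise extrinsic transform once the sphere center has been localized in each camera's 3D frame over several frames: align the two sets of center positions with a closed-form rigid registration (the Kabsch/Procrustes algorithm via SVD). This is a minimal, hypothetical example, not the authors' implementation; the function name and the assumption that per-frame sphere centers are already available are illustrative only.

```python
import numpy as np

def estimate_extrinsics(centers_ref, centers_cam):
    """Estimate (R, t) such that centers_ref ~= R @ centers_cam + t.

    centers_ref, centers_cam: (N, 3) arrays of sphere-center positions for the
    same N frames, expressed in the reference camera and the other camera.
    """
    centers_ref = np.asarray(centers_ref, dtype=float)
    centers_cam = np.asarray(centers_cam, dtype=float)

    # Center both point sets on their centroids.
    mu_ref = centers_ref.mean(axis=0)
    mu_cam = centers_cam.mean(axis=0)
    A = centers_cam - mu_cam
    B = centers_ref - mu_ref

    # Cross-covariance and SVD give the optimal rotation (Kabsch algorithm).
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_ref - R @ mu_cam
    return R, t
```

In practice, the per-frame sphere centers would themselves be estimated by fitting a sphere to the depth measurements (and detecting its projection in the color images), and the pairwise estimates would typically be refined jointly over the whole camera network.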