Automatic and adaptable registration of live RGBD video streams
Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor
Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, October 2015
DOI: 10.1145/2822013.2822027
Citations: 6
Abstract
We introduce DeReEs-4V, an algorithm that receives two separate RGBD video streams and automatically produces a unified scene through RGBD registration within a few seconds. The motivation behind the solution presented here is to allow game players to place depth-sensing cameras at arbitrary locations and capture any scene in which the views of the two sensors partially overlap. A typical way to combine partially overlapping views from multiple cameras is visual calibration using external markers placed within the field of view of both cameras. Calibration can be time-consuming and may require fine-tuning, interrupting gameplay. If the cameras are even slightly moved or bumped, the calibration process typically needs to be repeated from scratch. In this article we demonstrate how RGBD registration can be used to automatically find, while the system is running and without calibration, a 3D viewing transformation that matches the view of one camera to that of the other. To validate this approach, we compare our method against standard checkerboard-target calibration and thoroughly examine system performance under different scenarios. The presented system supports any application that might benefit from video capture with a wider operational field of view. Our results show that the system is robust to camera movement while simultaneously capturing and registering live point clouds from two depth-sensing cameras.
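To illustrate the kind of markerless, calibration-free alignment the abstract describes, below is a minimal sketch of pairwise RGBD point-cloud registration: a coarse global alignment from feature matching (FPFH + RANSAC) refined with ICP. This is not the authors' DeReEs-4V pipeline; it uses the Open3D library as a stand-in, and the file names and parameter values are illustrative assumptions.

```
# Sketch: coarse-to-fine rigid registration of two depth-camera point clouds.
# NOT the DeReEs-4V implementation; uses Open3D, with placeholder inputs.
import open3d as o3d

VOXEL = 0.05  # downsampling resolution in meters (assumed scene scale)

def preprocess(pcd):
    """Downsample a point cloud, then compute normals and FPFH features."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

def register(source, target):
    """Estimate the rigid 4x4 transform mapping `source` onto `target`."""
    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)
    # Coarse, marker-free alignment via feature matching + RANSAC.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh,
        mutual_filter=True,
        max_correspondence_distance=1.5 * VOXEL,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * VOXEL)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine refinement with point-to-plane ICP, seeded by the coarse result.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, 0.5 * VOXEL, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation

if __name__ == "__main__":
    # Placeholder inputs: one frame from each depth sensor, stored as .ply files.
    cam_a = o3d.io.read_point_cloud("camera_a_frame.ply")
    cam_b = o3d.io.read_point_cloud("camera_b_frame.ply")
    T = register(cam_b, cam_a)           # transform: camera B -> camera A
    merged = cam_a + cam_b.transform(T)  # unified scene in camera A's frame
    print(T)
```

In a live two-camera setting like the one in the paper, a step of this kind would be rerun whenever alignment quality degrades (e.g., after a camera is bumped), which is what removes the need to repeat a checkerboard calibration from scratch.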