{"title":"KinectFusion rapid 3D reconstruction and interaction with Microsoft Kinect","authors":"D. Molyneaux","doi":"10.1145/2282338.2282342","DOIUrl":null,"url":null,"abstract":"Using a Microsoft Kinect camera, the KinectFusion system enables a low-cost way for a user to digitally reconstruct a whole room and its contents within seconds. As the space is explored, new views of the arbitrary scene and objects are revealed and these are fused into a single 3D model. The 6DoF pose of the camera is tracked in real-time using a method which directly uses the point-based depth data of Kinect, and requires no feature extraction or feature tracking. Once the 3D pose of the camera is known, each depth measurement from the sensor can be integrated into a volumetric representation. Kinect Fusion enables many Augmented Reality applications and 3D interaction such as multi-touch on arbitrary shaped surfaces.","PeriodicalId":92512,"journal":{"name":"FDG : proceedings of the International Conference on Foundations of Digital Games. International Conference on the Foundations of Digital Games","volume":"1 1","pages":"3"},"PeriodicalIF":0.0000,"publicationDate":"2012-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"FDG : proceedings of the International Conference on Foundations of Digital Games. International Conference on the Foundations of Digital Games","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2282338.2282342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Using a Microsoft Kinect camera, the KinectFusion system provides a low-cost way for a user to digitally reconstruct a whole room and its contents within seconds. As the space is explored, new views of the arbitrary scene and objects are revealed, and these are fused into a single 3D model. The 6DoF pose of the camera is tracked in real time using a method that works directly on the point-based depth data from Kinect and requires no feature extraction or feature tracking. Once the 3D pose of the camera is known, each depth measurement from the sensor can be integrated into a volumetric representation. KinectFusion enables many augmented reality applications and 3D interactions such as multi-touch on arbitrarily shaped surfaces.
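In the published system, the volumetric representation is a truncated signed distance function (TSDF) volume updated on the GPU; the sketch below is a minimal CPU-side illustration in NumPy of how one depth frame could be fused into such a volume once the camera pose is known. The function name `integrate_tsdf`, the array shapes, the parameter names, and the simple per-observation weight of 1 are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np


def integrate_tsdf(tsdf, weights, depth, K, T_wc, voxel_size, origin, trunc=0.03):
    """Fuse one depth frame into a TSDF volume (illustrative CPU sketch).

    tsdf, weights : (X, Y, Z) float arrays -- running signed distances and weights.
    depth         : (H, W) depth image in metres (0 marks missing data).
    K             : 3x3 camera intrinsics matrix.
    T_wc          : 4x4 camera-to-world pose estimated by the tracker.
    voxel_size    : edge length of one voxel in metres.
    origin        : world position of voxel (0, 0, 0), shape (3,).
    trunc         : truncation distance of the signed distance function.
    """
    H, W = depth.shape
    X, Y, Z = tsdf.shape

    # World coordinates of every voxel centre (C-order matches .flat below).
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts_w = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size + origin

    # Bring voxel centres into the camera frame with the inverse pose.
    T_cw = np.linalg.inv(T_wc)
    pts_c = pts_w @ T_cw[:3, :3].T + T_cw[:3, 3]

    # Perspective projection onto the depth image.
    z = pts_c[:, 2]
    front = z > 1e-6
    u = np.zeros(z.shape, dtype=int)
    v = np.zeros(z.shape, dtype=int)
    u[front] = np.round(K[0, 0] * pts_c[front, 0] / z[front] + K[0, 2]).astype(int)
    v[front] = np.round(K[1, 1] * pts_c[front, 1] / z[front] + K[1, 2]).astype(int)
    valid = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Depth measurement observed along each voxel's ray.
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    # Truncated signed distance; skip voxels far behind the observed surface.
    sdf = np.clip(d - z, -trunc, trunc)
    update = np.where(valid & (d - z >= -trunc))[0]

    # Weighted running average (each observation contributes weight 1).
    w_old = weights.flat[update]
    tsdf.flat[update] = (tsdf.flat[update] * w_old + sdf[update]) / (w_old + 1.0)
    weights.flat[update] = w_old + 1.0
```

A caller would allocate `tsdf` initialised to the truncation value and `weights` to zero, then invoke this per frame with the pose produced by the depth-based tracker; the real system performs the same projective update per voxel in parallel on the GPU and extracts the surface by ray casting the volume.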