{"title":"集成3D点云与多视点视频","authors":"Feng Chen, I. Cheng, A. Basu","doi":"10.1109/3DTV.2009.5069628","DOIUrl":null,"url":null,"abstract":"Multi-viewpoint video has recently gained significant attention in academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video utilizing depth and texture maps of the input video. Then, we integrate 3D point cloud models with the resulting multi-viewpoint video generated in the first step by analyzing the depth information. As shown in our experiments, 3D point clouds can be seamlessly inserted into a multi-viewpoint video and realistic effect can be obtained. In addition, we compare the virtual viewpoint image generated by interpolating the two nearest neighbor cameras and by re-projecting the nearest camera.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Integrating 3D point clouds with multi-viewpoint video\",\"authors\":\"Feng Chen, I. Cheng, A. Basu\",\"doi\":\"10.1109/3DTV.2009.5069628\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-viewpoint video has recently gained significant attention in academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video utilizing depth and texture maps of the input video. Then, we integrate 3D point cloud models with the resulting multi-viewpoint video generated in the first step by analyzing the depth information. As shown in our experiments, 3D point clouds can be seamlessly inserted into a multi-viewpoint video and realistic effect can be obtained. In addition, we compare the virtual viewpoint image generated by interpolating the two nearest neighbor cameras and by re-projecting the nearest camera.\",\"PeriodicalId\":230128,\"journal\":{\"name\":\"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-05-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/3DTV.2009.5069628\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DTV.2009.5069628","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Integrating 3D point clouds with multi-viewpoint video
Multi-viewpoint video has recently gained significant attention in both academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video using the depth and texture maps of the input video. Then, we integrate 3D point cloud models into the multi-viewpoint video generated in the first step by analyzing the depth information. As shown in our experiments, 3D point clouds can be seamlessly inserted into a multi-viewpoint video with a realistic effect. In addition, we compare virtual viewpoint images generated by interpolating between the two nearest neighbor cameras with those generated by re-projecting from the nearest camera.
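
The abstract gives no implementation details, but the depth-based insertion step it describes can be illustrated with a minimal sketch. The Python example below is not the authors' code; the pinhole camera model, the array shapes, and all names (e.g., insert_points) are assumptions made for illustration. It projects a colored point cloud into one rendered view and uses that view's depth map as a z-buffer, so point-cloud points only overwrite pixels where they are closer than the existing scene.

    # Hypothetical sketch (not the authors' method): occlusion-aware insertion of
    # colored 3D points into a single rendered view, using its depth map as a z-buffer.
    import numpy as np

    def insert_points(image, depth_map, K, points_cam, colors):
        """Composite colored points (in camera coordinates) into `image`.

        image      : (H, W, 3) uint8 texture of the virtual view
        depth_map  : (H, W) float32 per-pixel depth of the rendered scene
        K          : (3, 3) camera intrinsic matrix
        points_cam : (N, 3) point cloud, already in this camera's frame
        colors     : (N, 3) uint8 color per point
        """
        out = image.copy()
        H, W = depth_map.shape

        # Keep only points in front of the camera.
        front = points_cam[:, 2] > 0
        pts, cols = points_cam[front], colors[front]
        z = pts[:, 2]

        # Pinhole projection into pixel coordinates.
        proj = (K @ pts.T).T
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)

        # Discard points that fall outside the image.
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        u, v, z, cols = u[inside], v[inside], z[inside], cols[inside]

        # Depth test: draw a point only where it is closer than the scene.
        visible = z < depth_map[v, u]
        out[v[visible], u[visible]] = cols[visible]
        return out

A full pipeline would repeat such a compositing step for every synthesized viewpoint and would also have to resolve cases where several points project to the same pixel (e.g., by keeping the nearest one), which this sketch omits.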