{"title":"Image-based three-dimensional free viewpoint video synthesis","authors":"D. Aliprandi, E. Piccinelli","doi":"10.1109/3DTV.2009.5069637","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069637","url":null,"abstract":"In this paper we describe an image-based rendering chain that realizes the free viewpoint video functionality throughout the whole scene. We summarize the most common problems that may be encountered in such an application and provide different solutions to each core stage of the rendering process. Moreover, we propose a novel way to specify the virtual camera's path that moves throughout the scene and implement a procedure to render the 3D effect by exploiting the stereoscopy principle. Finally, we give some results about the objective quality of the synthesized views w.r.t. their original counterparts acquired from real viewpoints.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134539423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D holographic display with optically addressed spatial light modulator","authors":"Liang Xinan, Xu Xuewu, S. Solanki, Pan Yuechao, R. Bin Adrian Tanjung, Tan Chiwei, Xu Baoxi, C. Chong","doi":"10.1109/3DTV.2009.5069618","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069618","url":null,"abstract":"We present a computer generated hologram (CGH) based setup for displaying 3D objects. An optically addressed spatial light modulator (OASLM) is used as a display device. By using the OASLM, the diffraction orders caused by the inter-pixel gap of an electrically addressed spatial light modulator (EASLM) can be eliminated. At the same time the viewing angle of reconstructed 3D objects can be enlarged by demagnifying the pixels of the EASLM from 36µm to 9µm through high resolution imaging optics. The holograms were computed based on a newly developed algorithm for the display.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132406372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Objective quality assessment method of stereo images","authors":"Jiachen Yang, Chunping Hou, Yuan Zhou, Zhuoyun Zhang, Jichang Guo","doi":"10.1109/3DTV.2009.5069615","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069615","url":null,"abstract":"Several metrics have been proposed in literature to assess the quality of 2D images, but the metrics devoted to quality assessment of stereoscopic images are very scarce. Therefore, in this paper, an objective assessment method is proposed to predict the quality level of stereoscopic images. This method assesses stereo images from the perspective of image quality and stereo sense. Experiments demonstrate that the objective assessment method the paper presented gets similar results with the general subjective assessment method. And the method is simple, rapid, convenient and practical.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133574977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera","authors":"F. Pérez Nava, J. P. Luke","doi":"10.1109/3DTV.2009.5069675","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069675","url":null,"abstract":"This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132261016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating 3D point clouds with multi-viewpoint video","authors":"Feng Chen, I. Cheng, A. Basu","doi":"10.1109/3DTV.2009.5069628","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069628","url":null,"abstract":"Multi-viewpoint video has recently gained significant attention in academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video utilizing depth and texture maps of the input video. Then, we integrate 3D point cloud models with the resulting multi-viewpoint video generated in the first step by analyzing the depth information. As shown in our experiments, 3D point clouds can be seamlessly inserted into a multi-viewpoint video and realistic effect can be obtained. In addition, we compare the virtual viewpoint image generated by interpolating the two nearest neighbor cameras and by re-projecting the nearest camera.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129520250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal pixel aspect ratio for stereoscopic 3D displays under practical viewing conditions","authors":"Hossein Azari, I. Cheng, A. Basu","doi":"10.1109/3DTV.2009.5069666","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069666","url":null,"abstract":"In multiview 3DTVs the original 3D-scene is reconstructed based on the corresponding pixels of adjacent 2D views. For conventional 2D display the highest image quality is usually achieved by uniform distribution of pixels. However, recent studies on the 3D reconstruction process show that for a given total resolution, a non-uniform horizontally-finer resolution yields better visual experience on 3D displays. Unfortunately, none of these studies explicitly model practical viewing conditions, such as the role of the 3D display as a medium and behavior of the human eyes. In this paper the previous models are extended by incorporating these factors into the optimization process. Based on this extended formulation the optimal ratios are calculated for a few typical viewing configurations. Some supporting subjective studies are presented as well.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130041614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Question interface for 3D picture creation on an autostereoscopic digital picture frame","authors":"C. Varekamp, P. Vandewalle, Marc de Putter","doi":"10.1109/3DTV.2009.5069616","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069616","url":null,"abstract":"We propose an interface for creating a depth map for a 2D picture. The image and depth map can be used for 3D display on an autostereoscopic photo frame. Our new interface does not require the user to draw on the picture or point at an object in the picture. Instead, semantic questions are asked about a given indicated position in the picture. This semantic information is then translated automatically into a depth map.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124143303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental-LDI for multi-view coding","authors":"V. Jantet, L. Morin, C. Guillemot","doi":"10.1109/3DTV.2009.5069647","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069647","url":null,"abstract":"This paper describes an Incremental algorithm for Layer Depth Image construction (I-LDI) from multi-view plus depth data sets. A solution to sampling artifacts is proposed, based on pixel interpolation (inpainting) restricted to isolated unknown pixels. A solution to ghosting artifacts is also proposed, based on a depth discontinuity detection, followed by a local foreground / background classification. We propose a formulation of warping equations which reduces time consumption, specifically for LDI warping. Tests on Breakdancers and Ballet MVD data sets show that extra layers in I-LDI contain only 10% of first layer pixels, compared to 50% for LDI. I-LDI Layers are also more compact, with a less spread pixel distribution, and thus easier to compress than LDI Visual rendering is of similar quality with I-LDI and LDI.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"33 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132063499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of 3D-TV LDV-content with Time-Of-Flight Camera","authors":"A. Frick, F. Kellner, B. Bartczak, R. Koch","doi":"10.1109/3DTV.2009.5069624","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069624","url":null,"abstract":"In this paper we describe an approach for 3D-TV Layered Depth Video (LDV) - Content creation using a capturing system of four CCD - Cameras and Time-Of-Flight - Sensor (ToF - Camera). We demonstrate a whole video production chain, from calibration of the camera rig, to generation of reliable depth maps for a single view of one of the CCD - Cameras, using only estimated depth provided by the ToF - Camera. We additionally show that we are able to generate proper occlusion layers for LDV - Content through a straight forward approach based on depth background extrapolation and backward texture mapping.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"447 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117004261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporally consistent dense depth map estimation via Belief Propagation","authors":"C. Çigla, A. Alatan","doi":"10.1109/3DTV.2009.5069636","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069636","url":null,"abstract":"A method for estimating temporally and spatially consistent dense depth maps in multiple camera setups is presented which is important for reduction of perception artifacts in 3D displays. For this purpose, initially, depth estimation is performed for each camera with the piece-wise planarity assumption and Markov Random Field (MRF) based relaxation at each time instant independently. During the relaxation step, the consistency of depth maps for different cameras is also considered for the reliability of the models. Next, temporal consistency of the depth maps is achieved in two steps. In the first step, median filtering is applied for the static or background pixels, whose intensity levels are constant in time. Such an approach decreases the number of inconsistent depth values significantly. The second step considers the moving pixels and MRF formulation is updated by the additional information from the depth maps of the consequent frames through motion compensation. For the solution of the MRF formulation for both spatial and temporal consistency, Belief Propagation approach is utilized. The experiments indicate that the proposed method provide reliable dense depth map estimates both in spatial and temporal domains.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126553635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}