Latest Publications from the 2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video

Image-based three-dimensional free viewpoint video synthesis
D. Aliprandi, E. Piccinelli
{"title":"Image-based three-dimensional free viewpoint video synthesis","authors":"D. Aliprandi, E. Piccinelli","doi":"10.1109/3DTV.2009.5069637","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069637","url":null,"abstract":"In this paper we describe an image-based rendering chain that realizes the free viewpoint video functionality throughout the whole scene. We summarize the most common problems that may be encountered in such an application and provide different solutions to each core stage of the rendering process. Moreover, we propose a novel way to specify the virtual camera's path that moves throughout the scene and implement a procedure to render the 3D effect by exploiting the stereoscopy principle. Finally, we give some results about the objective quality of the synthesized views w.r.t. their original counterparts acquired from real viewpoints.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134539423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
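The rendering chain above warps real camera views, together with per-pixel depth, into a virtual camera placed along the specified path. A minimal sketch of that core depth-image-based rendering (DIBR) step follows, assuming a pinhole model and z-buffered splatting; the function name, parameter conventions, and hole handling are illustrative and not taken from the paper.

```python
import numpy as np

def warp_to_virtual_view(image, depth, K, R_ref, t_ref, R_virt, t_virt):
    """Forward-warp a reference view into a virtual camera using per-pixel depth.

    Hypothetical helper, not the paper's pipeline: back-project every reference
    pixel with its depth, re-project into the virtual camera, and splat with a
    z-buffer. image: HxWx3, depth: HxW metric depth, K: 3x3 intrinsics,
    (R, t): world-to-camera pose of each camera (x_cam = R x_world + t).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project reference pixels to world coordinates.
    cam_ref = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    world = R_ref.T @ (cam_ref - t_ref.reshape(3, 1))

    # Project into the virtual camera.
    cam_virt = R_virt @ world + t_virt.reshape(3, 1)
    z = cam_virt[2]
    proj = K @ cam_virt
    u2 = np.round(proj[0] / np.where(z != 0, z, 1)).astype(int)
    v2 = np.round(proj[1] / np.where(z != 0, z, 1)).astype(int)

    # Z-buffered splatting; holes are left for a later inpainting stage.
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    colors = image.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v2[i], u2[i]]:
            zbuf[v2[i], u2[i]] = z[i]
            out[v2[i], u2[i]] = colors[i]
    return out
```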
3D holographic display with optically addressed spatial light modulator
Liang Xinan, Xu Xuewu, S. Solanki, Pan Yuechao, R. Bin Adrian Tanjung, Tan Chiwei, Xu Baoxi, C. Chong
{"title":"3D holographic display with optically addressed spatial light modulator","authors":"Liang Xinan, Xu Xuewu, S. Solanki, Pan Yuechao, R. Bin Adrian Tanjung, Tan Chiwei, Xu Baoxi, C. Chong","doi":"10.1109/3DTV.2009.5069618","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069618","url":null,"abstract":"We present a computer generated hologram (CGH) based setup for displaying 3D objects. An optically addressed spatial light modulator (OASLM) is used as a display device. By using the OASLM, the diffraction orders caused by the inter-pixel gap of an electrically addressed spatial light modulator (EASLM) can be eliminated. At the same time the viewing angle of reconstructed 3D objects can be enlarged by demagnifying the pixels of the EASLM from 36µm to 9µm through high resolution imaging optics. The holograms were computed based on a newly developed algorithm for the display.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132406372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
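Because the display is driven by computer-generated holograms, a minimal point-source (Fresnel) CGH computation is sketched below for orientation. It superposes spherical waves from the object points on the hologram plane and keeps the resulting phase; the paper's newly developed algorithm is not described in the abstract, so apart from the 9 µm pixel pitch everything here is an assumption.

```python
import numpy as np

def point_source_hologram(points, amps, pitch, shape, wavelength=532e-9):
    """Minimal point-source CGH sketch (illustrative, not the paper's algorithm).

    points: list of (x, y, z) object points in metres, z > 0 in front of the
    hologram plane; amps: per-point amplitudes; pitch: hologram pixel pitch.
    Returns a phase-only pattern suitable for a phase-modulating SLM.
    """
    k = 2 * np.pi / wavelength
    H, W = shape
    x = (np.arange(W) - W / 2) * pitch
    y = (np.arange(H) - H / 2) * pitch
    X, Y = np.meshgrid(x, y)

    field = np.zeros(shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from each point
    return np.angle(field)

# Example: two object points, using the 9 µm pitch quoted in the abstract.
holo = point_source_hologram(points=[(0.0, 0.0, 0.20), (1e-3, 0.0, 0.25)],
                             amps=[1.0, 0.8], pitch=9e-6, shape=(512, 512))
```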
Objective quality assessment method of stereo images
Jiachen Yang, Chunping Hou, Yuan Zhou, Zhuoyun Zhang, Jichang Guo
{"title":"Objective quality assessment method of stereo images","authors":"Jiachen Yang, Chunping Hou, Yuan Zhou, Zhuoyun Zhang, Jichang Guo","doi":"10.1109/3DTV.2009.5069615","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069615","url":null,"abstract":"Several metrics have been proposed in literature to assess the quality of 2D images, but the metrics devoted to quality assessment of stereoscopic images are very scarce. Therefore, in this paper, an objective assessment method is proposed to predict the quality level of stereoscopic images. This method assesses stereo images from the perspective of image quality and stereo sense. Experiments demonstrate that the objective assessment method the paper presented gets similar results with the general subjective assessment method. And the method is simple, rapid, convenient and practical.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133574977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 103
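The abstract states that the metric combines an image-quality term with a stereo-sense term but does not give the formula. The toy full-reference combination below only illustrates that two-term structure; the PSNR-based terms and the weight are assumptions, not the authors' method.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two arrays."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def stereo_quality(ref_l, ref_r, dist_l, dist_r, w=0.7):
    """Toy stereo quality score: image-quality term + stereo-sense term.

    The 'stereo sense' term compares left-right difference images, which roughly
    capture disparity structure; the weight w is an assumed parameter.
    """
    image_quality = 0.5 * (psnr(ref_l, dist_l) + psnr(ref_r, dist_r))
    stereo_sense = psnr(np.asarray(ref_l, float) - np.asarray(ref_r, float),
                        np.asarray(dist_l, float) - np.asarray(dist_r, float))
    return w * image_quality + (1 - w) * stereo_sense
```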
Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera
F. Pérez Nava, J. P. Luke
{"title":"Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera","authors":"F. Pérez Nava, J. P. Luke","doi":"10.1109/3DTV.2009.5069675","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069675","url":null,"abstract":"This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132261016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 58
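The abstract does not detail the joint super-resolution estimator, so the sketch below shows a conventional plenoptic baseline instead: shift-and-add refocusing over candidate depth slopes, a Laplacian focus measure, and per-pixel selection of the sharpest slice to obtain a depth index map and an all-in-focus image. The light-field layout and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, laplace

def refocus(lf, slope):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]."""
    U, V, H, W = lf.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = (u - U // 2) * slope, (v - V // 2) * slope
            acc += nd_shift(lf[u, v], (dy, dx), order=1, mode='nearest')
    return acc / (U * V)

def depth_and_all_in_focus(lf, slopes):
    """Toy depth-from-focus pipeline (not the paper's joint estimator)."""
    stack = np.stack([refocus(lf, s) for s in slopes])           # D x H x W
    sharpness = np.stack([np.abs(laplace(im)) for im in stack])  # focus measure
    best = np.argmax(sharpness, axis=0)                          # depth index map
    aif = np.take_along_axis(stack, best[None], axis=0)[0]       # all-in-focus image
    return best, aif
```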
Integrating 3D point clouds with multi-viewpoint video
Feng Chen, I. Cheng, A. Basu
{"title":"Integrating 3D point clouds with multi-viewpoint video","authors":"Feng Chen, I. Cheng, A. Basu","doi":"10.1109/3DTV.2009.5069628","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069628","url":null,"abstract":"Multi-viewpoint video has recently gained significant attention in academic and commercial fields. In this work, we propose a new method for incorporating 3D point cloud models into multi-viewpoint video. First, we synthesize virtual multi-viewpoint video utilizing depth and texture maps of the input video. Then, we integrate 3D point cloud models with the resulting multi-viewpoint video generated in the first step by analyzing the depth information. As shown in our experiments, 3D point clouds can be seamlessly inserted into a multi-viewpoint video and realistic effect can be obtained. In addition, we compare the virtual viewpoint image generated by interpolating the two nearest neighbor cameras and by re-projecting the nearest camera.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129520250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
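The key integration step is a per-pixel depth test between the projected point cloud and the depth map of the synthesized view. A minimal, assumed version of that compositing step is sketched below; it is not the authors' exact pipeline.

```python
import numpy as np

def composite_point_cloud(frame, depth, points, colors, K, R, t):
    """Insert a 3D point cloud into one synthesized viewpoint (illustrative).

    frame: HxWx3 rendered view, depth: HxW scene depth for that view,
    points: Nx3 world coordinates, colors: Nx3, (K, R, t): view camera.
    A point is drawn only where it is closer than the stored scene depth.
    """
    H, W = depth.shape
    cam = R @ points.T + t.reshape(3, 1)            # world -> camera
    z = cam[2]
    proj = K @ cam
    u = np.round(proj[0] / np.where(z != 0, z, 1)).astype(int)
    v = np.round(proj[1] / np.where(z != 0, z, 1)).astype(int)

    out = frame.copy()
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for i in np.flatnonzero(ok):
        if z[i] < depth[v[i], u[i]]:                # point occludes the video pixel
            out[v[i], u[i]] = colors[i]
    return out
```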
Optimal pixel aspect ratio for stereoscopic 3D displays under practical viewing conditions
Hossein Azari, I. Cheng, A. Basu
{"title":"Optimal pixel aspect ratio for stereoscopic 3D displays under practical viewing conditions","authors":"Hossein Azari, I. Cheng, A. Basu","doi":"10.1109/3DTV.2009.5069666","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069666","url":null,"abstract":"In multiview 3DTVs the original 3D-scene is reconstructed based on the corresponding pixels of adjacent 2D views. For conventional 2D display the highest image quality is usually achieved by uniform distribution of pixels. However, recent studies on the 3D reconstruction process show that for a given total resolution, a non-uniform horizontally-finer resolution yields better visual experience on 3D displays. Unfortunately, none of these studies explicitly model practical viewing conditions, such as the role of the 3D display as a medium and behavior of the human eyes. In this paper the previous models are extended by incorporating these factors into the optimization process. Based on this extended formulation the optimal ratios are calculated for a few typical viewing configurations. Some supporting subjective studies are presented as well.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130041614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
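The abstract does not give the extended error model, so the sketch below only illustrates the outer optimization loop: under a fixed total pixel budget, candidate pixel aspect ratios are evaluated with a user-supplied, viewing-condition-aware error model (a stand-in for the paper's formulation) and the ratio with the lowest predicted error is kept.

```python
import numpy as np

def best_pixel_aspect_ratio(total_pixels, frame_w, frame_h, error_model, ratios):
    """Grid search over pixel aspect ratios at a fixed pixel budget (sketch).

    error_model(pixel_w, pixel_h) is a hypothetical stand-in for the paper's
    viewing-condition-aware 3D reconstruction error; ratio = pixel_w / pixel_h.
    """
    best = None
    for r in ratios:
        # Choose cols so that cols * rows ~= total_pixels and the resulting
        # pixel sizes satisfy (frame_w / cols) / (frame_h / rows) == r.
        cols = max(1, int(np.sqrt(total_pixels * frame_w / (frame_h * r))))
        rows = max(1, total_pixels // cols)
        err = error_model(frame_w / cols, frame_h / rows)
        if best is None or err < best[0]:
            best = (err, r, cols, rows)
    return best

# Example with a toy error model that mildly favours finer horizontal sampling.
result = best_pixel_aspect_ratio(
    total_pixels=1920 * 1080, frame_w=0.93, frame_h=0.52,
    error_model=lambda pw, ph: 2.0 * pw ** 2 + ph ** 2,   # assumed, for illustration
    ratios=np.linspace(0.4, 1.2, 17))
```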
Question interface for 3D picture creation on an autostereoscopic digital picture frame
C. Varekamp, P. Vandewalle, Marc de Putter
{"title":"Question interface for 3D picture creation on an autostereoscopic digital picture frame","authors":"C. Varekamp, P. Vandewalle, Marc de Putter","doi":"10.1109/3DTV.2009.5069616","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069616","url":null,"abstract":"We propose an interface for creating a depth map for a 2D picture. The image and depth map can be used for 3D display on an autostereoscopic photo frame. Our new interface does not require the user to draw on the picture or point at an object in the picture. Instead, semantic questions are asked about a given indicated position in the picture. This semantic information is then translated automatically into a depth map.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124143303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
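As one illustration of how answers to semantic questions might be turned into depth, the toy rules below assign far depth to 'sky', a vertical ramp to 'ground', and a constant depth to clicked 'object' regions. These rules, labels, and values are assumptions made for the example; the paper's actual questions and translation rules are not given in the abstract.

```python
import numpy as np

def depth_from_answers(shape, answers):
    """Toy translation of (position, semantic label) answers into a depth map.

    answers: list of (row, col, label) items for positions the user was asked
    about, with label in {'sky', 'ground', 'object'}. Depth is in arbitrary
    units: 100 = far, 1 = near.
    """
    H, W = shape
    depth = np.full(shape, 100.0)                # default: far background
    ground_ramp = np.linspace(100.0, 1.0, H)     # far at the top, near at the bottom

    for y, x, label in answers:
        if label == 'sky':
            depth[:y, :] = 100.0
        elif label == 'ground':
            depth[y:, :] = ground_ramp[y:, None]
        elif label == 'object':
            # Crude box around the click, anchored to the ground depth at that row.
            h, w = H // 8, W // 8
            depth[max(0, y - h):y + h, max(0, x - w):x + w] = ground_ramp[y]
    return depth
```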
Incremental-LDI for multi-view coding
V. Jantet, L. Morin, C. Guillemot
{"title":"Incremental-LDI for multi-view coding","authors":"V. Jantet, L. Morin, C. Guillemot","doi":"10.1109/3DTV.2009.5069647","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069647","url":null,"abstract":"This paper describes an Incremental algorithm for Layer Depth Image construction (I-LDI) from multi-view plus depth data sets. A solution to sampling artifacts is proposed, based on pixel interpolation (inpainting) restricted to isolated unknown pixels. A solution to ghosting artifacts is also proposed, based on a depth discontinuity detection, followed by a local foreground / background classification. We propose a formulation of warping equations which reduces time consumption, specifically for LDI warping. Tests on Breakdancers and Ballet MVD data sets show that extra layers in I-LDI contain only 10% of first layer pixels, compared to 50% for LDI. I-LDI Layers are also more compact, with a less spread pixel distribution, and thus easier to compress than LDI Visual rendering is of similar quality with I-LDI and LDI.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"33 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132063499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
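The central data structure is a layered depth image in which each pixel holds several (depth, colour) samples, and the incremental construction only inserts samples that the existing layers do not already explain. The minimal class below sketches that idea; the depth tolerance and the front-to-back ordering are assumptions, not the paper's exact construction.

```python
import numpy as np

class LayeredDepthImage:
    """Minimal LDI sketch: each pixel stores a list of (depth, color) samples."""

    def __init__(self, height, width):
        self.shape = (height, width)
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, row, col, depth, color, tol=0.01):
        """Incremental insertion: skip samples already represented at this pixel."""
        samples = self.layers[row][col]
        if any(abs(d - depth) < tol for d, _ in samples):
            return False                       # redundant: keeps extra layers compact
        samples.append((depth, color))
        samples.sort(key=lambda s: s[0])       # front-to-back order
        return True

    def first_layer(self):
        """Front-most colour per pixel (the 'first layer' image)."""
        out = np.zeros(self.shape + (3,))
        for r in range(self.shape[0]):
            for c in range(self.shape[1]):
                if self.layers[r][c]:
                    out[r, c] = self.layers[r][c][0][1]
        return out
```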
Generation of 3D-TV LDV-content with Time-Of-Flight Camera
A. Frick, F. Kellner, B. Bartczak, R. Koch
{"title":"Generation of 3D-TV LDV-content with Time-Of-Flight Camera","authors":"A. Frick, F. Kellner, B. Bartczak, R. Koch","doi":"10.1109/3DTV.2009.5069624","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069624","url":null,"abstract":"In this paper we describe an approach for 3D-TV Layered Depth Video (LDV) - Content creation using a capturing system of four CCD - Cameras and Time-Of-Flight - Sensor (ToF - Camera). We demonstrate a whole video production chain, from calibration of the camera rig, to generation of reliable depth maps for a single view of one of the CCD - Cameras, using only estimated depth provided by the ToF - Camera. We additionally show that we are able to generate proper occlusion layers for LDV - Content through a straight forward approach based on depth background extrapolation and backward texture mapping.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"447 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117004261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
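For the occlusion layer, the abstract mentions depth background extrapolation followed by backward texture mapping. The sketch below illustrates only an assumed version of the first step, extending background depth for a fixed margin behind each depth discontinuity along a row; fetching texture for those pixels from the side cameras is not shown.

```python
import numpy as np

def background_depth_layer(depth, disc_thresh=0.1, margin=16):
    """Assumed background-depth extrapolation near depth discontinuities.

    depth: HxW depth map (larger values = farther). Wherever a row jumps from
    background to foreground, the background depth is continued for 'margin'
    pixels behind the foreground edge, forming the occlusion-layer depth.
    """
    H, W = depth.shape
    bg = np.full(depth.shape, np.nan)
    for y in range(H):
        row = depth[y]
        for x in range(1, W):
            if row[x - 1] - row[x] > disc_thresh:        # far -> near edge
                bg[y, x:x + margin] = row[x - 1]         # extend background depth rightwards
            elif row[x] - row[x - 1] > disc_thresh:      # near -> far edge
                bg[y, max(0, x - margin):x] = row[x]     # extend it leftwards
    return bg
```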
Temporally consistent dense depth map estimation via Belief Propagation
C. Çigla, A. Alatan
{"title":"Temporally consistent dense depth map estimation via Belief Propagation","authors":"C. Çigla, A. Alatan","doi":"10.1109/3DTV.2009.5069636","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069636","url":null,"abstract":"A method for estimating temporally and spatially consistent dense depth maps in multiple camera setups is presented which is important for reduction of perception artifacts in 3D displays. For this purpose, initially, depth estimation is performed for each camera with the piece-wise planarity assumption and Markov Random Field (MRF) based relaxation at each time instant independently. During the relaxation step, the consistency of depth maps for different cameras is also considered for the reliability of the models. Next, temporal consistency of the depth maps is achieved in two steps. In the first step, median filtering is applied for the static or background pixels, whose intensity levels are constant in time. Such an approach decreases the number of inconsistent depth values significantly. The second step considers the moving pixels and MRF formulation is updated by the additional information from the depth maps of the consequent frames through motion compensation. For the solution of the MRF formulation for both spatial and temporal consistency, Belief Propagation approach is utilized. The experiments indicate that the proposed method provide reliable dense depth map estimates both in spatial and temporal domains.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126553635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
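The first step in the abstract, temporal median filtering of the depth at static pixels, can be illustrated directly. The sketch below treats a pixel as static when its intensity range over the window is small (the tolerance is an assumption) and leaves moving pixels untouched for the motion-compensated MRF/Belief Propagation stage, which is not reproduced here.

```python
import numpy as np

def temporal_median_static(depths, frames, intensity_tol=2.0):
    """Temporal median filtering of depth at static pixels.

    depths, frames: T x H x W stacks of per-frame depth maps and grey images.
    Pixels whose intensity stays nearly constant over the window get the
    temporal median depth; all other pixels keep their original estimates.
    """
    static = np.ptp(frames, axis=0) < intensity_tol      # peak-to-peak intensity range
    out = depths.copy()
    median_depth = np.median(depths, axis=0)
    out[:, static] = median_depth[static]
    return out
```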