2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON): Latest Publications

Cascaded quantization based progressive 3D mesh compression
Lei Zhang, Xiangyang Ji, Qionghai Dai, Naiyao Zhang
DOI: 10.1109/3DTV.2011.5877173
Abstract: We propose an efficient progressive 3D mesh compression method supporting flexible quality scalability. The mesh geometry prediction residuals are partitioned into a number of iterative layers. Each iterative layer is split into several quality layers using cascaded quantization and then encoded with a context-adaptive binary arithmetic coder (CABAC). All quality layers are encoded and transmitted independently to enable better error resilience. To improve rate-distortion performance, the quantization parameter of the first quality layer is determined by the importance of the corresponding iterative layer. Simulation results demonstrate that the proposed method provides better compression performance than state-of-the-art coders.
Citations: 0

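The layered quantization idea in the abstract can be sketched as follows. This is a hypothetical illustration of cascaded quantization in general, not the authors' implementation: each pass quantizes the error left over by the previous, coarser pass, so each extra quality layer refines the reconstruction.

```python
import numpy as np

def cascaded_quantize(residuals, steps):
    """Split residuals into quality layers by re-quantizing the
    remaining error with progressively finer step sizes."""
    layers = []
    remaining = np.asarray(residuals, dtype=float)
    for q in steps:                      # coarse -> fine step sizes
        idx = np.round(remaining / q)    # quantization indices for this layer
        layers.append(idx.astype(int))
        remaining = remaining - idx * q  # error passed on to the next layer
    return layers

def reconstruct(layers, steps, n_layers):
    """Decode using only the first n_layers quality layers."""
    out = np.zeros_like(layers[0], dtype=float)
    for idx, q in list(zip(layers, steps))[:n_layers]:
        out += idx * q
    return out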
Novel view synthesis based on depth map layers representation
Nurulfajar Abd Manap, J. Soraghan
DOI: 10.1109/3DTV.2011.5877181
Abstract: This paper presents a method that jointly performs stereo matching and inter-view interpolation to obtain a depth map and a virtual view image. A novel view synthesis method based on a depth-map-layers representation of the stereo image pair is proposed. The main idea of this approach is to separate the depth map into several layers of depth based on the disparity of the corresponding points. The novel view can be interpolated independently for each depth layer by masking that particular layer. The final synthesized view is obtained by flattening all layers into a single image. Since view synthesis is performed in separate layers, an extracted virtual object can be superimposed onto another 3D scene. The method is useful for free-viewpoint video applications with a small number of cameras. Experimental results show that the algorithm improves the efficiency both of finding the depth map and of synthesizing new virtual views.
Citations: 14

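The layer-masking idea described in the abstract can be sketched generically: pixels are bucketed into disparity layers, each layer is shifted by a single representative disparity, and the layers are composited far to near. This is a minimal sketch of the general technique under assumed per-layer constant shifts, not the paper's algorithm; gaps left by disocclusion stay unfilled here.

```python
import numpy as np

def synthesize_view(image, disparity, layer_edges, shift_scale=1.0):
    """Warp an image to a virtual viewpoint layer by layer:
    mask pixels by disparity range, shift each layer horizontally,
    and composite from far (small disparity) to near (large)."""
    h, w = disparity.shape
    out = np.zeros_like(image)
    for lo, hi in layer_edges:                       # iterate far -> near
        mask = (disparity >= lo) & (disparity < hi)
        d = int(round(shift_scale * (lo + hi) / 2))  # one shift per layer
        ys, xs = np.nonzero(mask)
        xt = xs + d
        ok = (xt >= 0) & (xt < w)
        out[ys[ok], xt[ok]] = image[ys[ok], xs[ok]]  # near layers overwrite far
    return out
```

Unwritten output pixels (value 0 above) correspond to disocclusions that a full method would inpaint or interpolate.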
Projector domain phase unwrapping in a structured light system with stereo cameras
Ricardo R. Garcia, A. Zakhor
DOI: 10.1109/3DTV.2011.5877215
Abstract: Phase-shifted sinusoids are commonly used as projection patterns in structured light systems consisting of projectors and cameras. They require few image captures per 3D reconstruction and have low decoding complexity. Recently, structured light systems with a projector and a pair of stereo cameras have been used in order to overcome the traditional phase discontinuity problem and allow for the reconstruction of scenes with multiple objects. In this paper, we propose a new approach to the phase unwrapping process in such systems. Rather than iterating through all pixels in the two cameras to determine the global phase of each pixel, we iterate through the projector pixels to solve for correspondences between the two camera views. An energy minimization framework is applied to these initial estimated correspondences to enforce smoothness and to fill in missing pixels. Unlike existing approaches, our method allows simultaneous unwrapping of both camera images and enforces consistency across them. We demonstrate the effectiveness of our approach experimentally on a few scenes.
Citations: 3

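Before unwrapping, the phase-shifted sinusoids mentioned above must first be decoded into a wrapped phase. A standard three-step decoder (the textbook formula for 120° shifts, not this paper's specific pipeline, which concerns the unwrapping stage) looks like this:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Recover wrapped phase in (-pi, pi] from three captures of a
    sinusoidal pattern shifted by -120, 0, +120 degrees:
      I_k = A + B*cos(phi + (k-2)*2*pi/3),  k = 1, 2, 3.
    Then I1 - I3 = sqrt(3)*B*sin(phi) and 2*I2 - I1 - I3 = 3*B*cos(phi),
    so atan2 of those two quantities yields phi."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

The unknown offset A and amplitude B cancel, which is what makes the three-step scheme robust to ambient light and surface albedo.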
Tools for 3D-TV programme production
O. Grau, Marcus Muller, Josef Kluger
DOI: 10.1109/3DTV.2011.5877230
Abstract: This contribution discusses tools for the production of 3D-TV programmes as developed and tested in the 3D4YOU project. The project looked in particular into image-plus-depth based formats and their integration into a 3D-TV production chain. This contribution focuses on requirements and production approaches for selected programme genres and describes examples of on-set and post-production tools for capture and generation of depth information.
Citations: 3

Light engine and optics for HELIUM3D auto-stereoscopic laser scanning display
K. Akşit, S. Olcer, E. Erden, V. C. Kishore, H. Urey, E. Willman, H. Baghsiahi, S. Day, David R. Selviah, F. Aníbal Fernández, P. Surman
DOI: 10.1109/3DTV.2011.5877226
Abstract: This paper presents a laser-based auto-stereoscopic 3D display technique and a prototype utilizing a dual projector light engine. The solution described is able to form dynamic exit pupils under the control of a multi-user head-tracker. A prototype completed recently is able to provide a glasses-free solution for a single user at a fixed position. At the end of the prototyping phase it is expected to enable a multiple user interface with an integration of the pupil tracker and the spatial light modulator.
Citations: 7

Spatial 3D imaging by synthetic and digitized holography
Y. Arima, K. Matsushima, S. Nakahara
DOI: 10.1109/3DTV.2011.5877174
Abstract: A novel method named digitized holography is proposed for 3D display systems. The technique replaces the whole process of classical holography with digital processing of optical wave-fields. Digitized holography allows us to edit holograms and reconstruct spatial 3D images that include both real objects and CG-modeled virtual objects.
Citations: 2

Rendering multi-view plus depth data on light-field displays
Alexandre Ouazan, P. Kovács, T. Balogh, A. Barsi
DOI: 10.1109/3DTV.2011.5877220
Abstract: This paper presents an approach for rendering heavily extrapolated novel views to be used as input for light-field displays. This view generation method builds on a combination and enhancement of existing methods. Interpolation quality is assured by detecting and keeping the most reliable gap-area information from the content using depth layers. For the extrapolation process, which is the main contribution of this paper, we implemented an algorithm that follows isophote lines in order to reconstruct objects and patterns using gradient filling and Poisson reconstruction. Using the algorithms described, it is possible to generate wide-baseline light-field data from Multi-View plus Depth (MVD) data of moderate baseline. The approach is demonstrated by generating interpolated and extrapolated views for feeding a HoloVizio large-scale display with captured video data.
Citations: 7

Flexible OpenCL accelerated disparity estimation for video communication applications
C. Weigel, N. Treutner
DOI: 10.1109/3DTV.2011.5877207
Abstract: Due to widespread broadband connections in ordinary households, the use of video chat over the Internet is no longer limited to business meetings. However, the camera configuration usually makes it impossible to achieve direct eye contact between the conversational partners. This effect can be compensated using virtual view synthesis methods based on disparity maps: the virtual camera is positioned "behind" the communication window and thus re-establishes eye contact. Obtaining a good disparity map is still a challenging problem and, for video communication, must run at interactive frame rates. In this paper we present optimized algorithms for disparity estimation that run in near real time. Recent developments in the consumer-hardware industry allow the implementation of complex algorithms for eye-gaze correction using relatively inexpensive out-of-the-box components. We employ the newly introduced OpenCL framework and present an implementation of several optimized algorithms on a Graphics Processing Unit (GPU). Our implementation supports different methods for cost estimation and aggregation, which can be combined flexibly, and we present a method to efficiently implement a dynamic programming approach on the GPU. Our contribution makes it possible to interactively change algorithm parameters and get instant visual feedback, which is crucial in algorithm development and parameter tuning. We also show first results of virtual views that re-establish eye contact.
Citations: 2

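The cost-estimation and aggregation stages the abstract refers to can be illustrated with a generic CPU baseline: per-pixel absolute-difference cost, box-window aggregation, and winner-takes-all selection. This is a minimal sketch of the standard pipeline, not the authors' OpenCL implementation (which also includes a dynamic-programming optimizer); the `np.roll`-based box filter wraps at image borders, which is acceptable only for illustration.

```python
import numpy as np

def box_filter(a, r):
    """Sum a over a (2r+1) x (2r+1) window by accumulating shifted
    copies (np.roll wraps at the borders; fine for this sketch)."""
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

def disparity_wta(left, right, max_disp, radius=1, bad=1e3):
    """Absolute-difference cost + box aggregation + winner-takes-all."""
    h, w = left.shape
    cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        diff = np.full((h, w), bad)          # columns with no match get a high cost
        diff[:, d:] = np.abs(left[:, d:] - right[:, : w - d])
        cost[d] = box_filter(diff, radius)   # aggregate over the support window
    return np.argmin(cost, axis=0)           # winner-takes-all per pixel
```

Swapping the cost (e.g. SAD for SSD) or the aggregation window is a local change here, which mirrors the flexibility the paper emphasizes.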
Immersive 3D media networking: Challenges and solutions
S. Worrall
DOI: 10.1109/3DTV.2011.5877151
Abstract: Recent years have seen a surge of interest in 3D entertainment. Cinema-goers have flocked to 3D movies in large numbers, which has encouraged broadcasters to introduce 3DTV channels. At the same time, advances in display technology have resulted in more affordable stereoscopic displays being made available to the consumer. While delivering stereoscopic 3D video to the general public's homes might be considered a relatively straightforward task, providing more immersive multi-view experiences remains a significant challenge. The main challenge arises from the amount of data that needs to be transferred from content creator to content consumer: some advanced displays require as many as sixty-four views, which cannot be delivered by existing media networking technologies and architectures. This tutorial examines the key challenges in 3D media networking and discusses some of the potential solutions being worked on by research groups around the world. In particular, we focus on proposals to combine traditional broadcast networks with Peer-to-Peer (P2P) media overlays, leveraging the latest research results on Quality of Experience and visual saliency to optimize the system, enabling it to cope better with issues such as bandwidth throttling of P2P content by Internet Service Providers (ISPs).
Citations: 0

Optimized contrast reduction for crosstalk cancellation in 3D displays
C. Doutre, P. Nasiopoulos
DOI: 10.1109/3DTV.2011.5877186
Abstract: Subtractive crosstalk cancellation is an effective way to reduce the appearance of ghosting in 3D displays. However, effective cancellation requires the black level of the input images to be raised above zero, which reduces image contrast and visual quality. Previous methods for selecting the raised black level do not consider the image content; they are either based on the worst case or do not guarantee complete crosstalk cancellation. Previous methods also scale the red, green and blue channels independently, which results in images with washed-out colors. This paper provides two contributions. First, we derive the minimum amount the black level has to be raised, when using linear scaling in RGB space, to ensure crosstalk can be fully cancelled for a particular image. Second, we propose scaling the luma channel in YCbCr color space while keeping the chroma values constant, instead of scaling in RGB space, to better preserve color; we also derive the minimum amount the luma range has to be compressed to ensure that crosstalk can be fully cancelled. Experimental results show that our methods produce images with better color and contrast compared to scaling the RGB channels based on the worst case, while still guaranteeing that crosstalk can be fully cancelled.
Citations: 7

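The content-adaptive black-level idea can be illustrated with a generic first-order crosstalk model (a sketch of the general principle, not the paper's exact derivation). If a fraction c of the other eye's image leaks in, subtractive cancellation needs i'_a - c*i'_b >= 0 at every pixel. With linear scaling i' = beta + (1 - beta)*i on images normalized to [0, 1], that condition becomes beta*(1 - c) >= (1 - beta)*m, where m is the worst per-pixel violation max(0, c*i_b - i_a) over both eyes, giving beta >= m / (1 - c + m). The worst-case image (black pixel against a full-white pixel) yields m = c and hence beta = c, while an easier image needs a smaller beta, which is the content-adaptive gain.

```python
import numpy as np

def min_black_level(i_l, i_r, c):
    """Smallest raised black level beta (images in [0, 1]) such that
    after linear scaling i' = beta + (1 - beta)*i the subtractive
    term i'_a - c*i'_b is non-negative for both eyes.
    Generic first-order crosstalk model, not the paper's exact math."""
    # worst violation over both eye orderings, clipped at zero
    m = max(0.0, float(np.max(c * i_r - i_l)), float(np.max(c * i_l - i_r)))
    # beta*(1-c) >= (1-beta)*m  =>  beta >= m / (1 - c + m)
    return m / (1.0 - c + m)

def raise_black(img, beta):
    """Linearly compress the image range into [beta, 1]."""
    return beta + (1.0 - beta) * img
```

Applying the same idea to the luma channel only, as the paper proposes, preserves chroma and hence color saturation.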