2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (Latest Publications)

Monitoring large volumes of interest by using voxel visibility
Diego Ruiz, K. Hagihara, B. Macq
Abstract: Classical visual hull implementations are founded on the hypothesis that the object is entirely visible to all the cameras. This limits the size of the volume to monitor. By adding visibility to the classical occupancy description, we limit the impact of each camera to the sub-volume of interest viewed by it. Different parts of the volume of interest are reconstructed by different sub-groups of cameras. Within a distributed reconstruction system, the volume of interest is no longer constrained by camera placement but by the cost of the equipment and the latency of the system. We demonstrate the usage of visibility with a real-time system using pinhole cameras and with sequences acquired on board a moving train with fish-eye lenses.
DOI: https://doi.org/10.1109/3DTV.2009.5069667 | Published: 2009-05-04 | Citations: 0
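The per-voxel visibility test described in this abstract can be sketched in a few lines. The code below is an illustrative toy, not the authors' implementation: camera dictionaries carrying a 3x4 projection matrix `P`, image size `w`/`h`, and a binary silhouette `mask` are assumed data structures invented for this sketch.

```python
import numpy as np

def visible_cameras(voxel, cameras):
    """Return the subset of cameras whose image plane actually sees this voxel."""
    vis = []
    X = np.append(voxel, 1.0)          # homogeneous 3D point
    for cam in cameras:
        x = cam["P"] @ X
        if x[2] <= 0:                  # behind the camera
            continue
        u, v = x[0] / x[2], x[1] / x[2]
        if 0 <= u < cam["w"] and 0 <= v < cam["h"]:
            vis.append(cam)
    return vis

def voxel_occupied(voxel, cameras, min_views=2):
    """Visual hull test restricted to the cameras that see the voxel."""
    vis = visible_cameras(voxel, cameras)
    if len(vis) < min_views:           # too few views to decide: treat as empty
        return False
    # Occupied only if the voxel projects inside the silhouette of every
    # camera that sees it ('mask' is a binary silhouette image).
    for cam in vis:
        x = cam["P"] @ np.append(voxel, 1.0)
        u, v = int(x[0] / x[2]), int(x[1] / x[2])
        if not cam["mask"][v, u]:
            return False
    return True
```

Restricting the silhouette consistency check to `visible_cameras` is what lets different sub-volumes be carved by different camera sub-groups.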
3D content generation for autostereoscopic displays
K. Dimitropoulos, T. Semertzidis, N. Grammalidis
Abstract: Content generation is one of the most critical issues for the growth of 3DTV services in the future. New autostereoscopic displays, such as the Philips Wow® 3D display, have significant advantages, including an improved 3D viewing experience, wider viewing angles, support for multiple viewers and no need for special glasses. However, due to their content formatting requirements (2D + depth), live-action content is much more difficult to create. In this paper a new approach for 3D content generation is proposed, integrating an existing state-of-the-art MRF-based disparity estimation method with additional pre- and post-processing steps. The proposed method uses rectification and colour segmentation to solve significant problems in disparity estimation, as well as decomposition of the scene into foreground and background depth maps. Tests with different stereo sequences have already produced very promising results.
DOI: https://doi.org/10.1109/3DTV.2009.5069642 | Published: 2009-05-04 | Citations: 4
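The "2D + depth" format mentioned above pairs each video frame with a per-pixel depth plane. A minimal packing sketch is shown below, using one common side-by-side layout; the actual Philips format defines its own arrangement and signalling headers, which are omitted here.

```python
import numpy as np

def pack_2d_plus_depth(texture, depth):
    """Pack an RGB frame and its depth map into a side-by-side
    '2D + depth' frame: depth is normalized to 8 bits and stored
    as a grey image next to the texture."""
    d = depth.astype(float)
    span = max(d.max() - d.min(), 1e-9)          # avoid divide-by-zero
    d8 = ((d - d.min()) / span * 255).astype(np.uint8)
    depth_rgb = np.repeat(d8[:, :, None], 3, axis=2)   # grey depth plane
    return np.concatenate([texture, depth_rgb], axis=1)
```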
Multisensory integration of a sound with stereo 3-D visual events
K. Sakurai, P. M. Grove
Abstract: The stream/bounce effect is an example of audio/visual interaction in which two identical luminance-defined targets in a 2-D display move toward one another from opposite sides of a display, coincide, and continue past one another along collinear trajectories. The targets can be perceived to either stream past or bounce off of one another. Streaming is the dominant perception in visual-only displays, while bouncing predominates when an auditory transient tone is presented at the point of coincidence. We extended previous findings on audio/visual interactions, using 3-D displays, and found the following two points. First, the sound-induced bias towards bouncing persists in spite of the introduction of spatial offsets in depth between the trajectories, which reduce the probability of motion reversals. Second, audio/visual interactions are similar for luminance-defined and disparity-defined displays, indicating that audio/visual interaction occurs at or beyond the visual processing stage where disparity-defined form is recovered.
DOI: https://doi.org/10.1109/3DTV.2009.5069684 | Published: 2009-05-04 | Citations: 1
Wave Field Synthesis
K. Brandenburg, Sandra Brix, T. Sporer
Abstract: The paper introduces Wave Field Synthesis (WFS). The technology was created at Delft University of Technology 20 years ago. Since then it has been refined to deliver true immersive sound. Application areas include cinema (with a priority on the combination with 3D video), theme parks, VR installations and, in the long run, home theatres. The paper introduces the basic technology and then focuses on applications of WFS and special requirements.
DOI: https://doi.org/10.1109/3DTV.2009.5069680 | Published: 2009-05-04 | Citations: 41
Quality improving techniques in DIBR for free-viewpoint video
L. Do, S. Zinger, Y. Morvan, P. D. With
Abstract: This paper evaluates our 3D view interpolation rendering algorithm and proposes a few performance improving techniques. We aim at developing a rendering method for free-viewpoint 3DTV, based on depth image warping from surrounding cameras. The key feature of our approach is warping texture and depth simultaneously in the first stage and postponing blending of the new view to a later stage, thereby avoiding errors in the virtual depth map. We evaluate the rendering quality in two ways. Firstly, it is measured by varying the distance between the two nearest cameras. We have obtained a PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to the performance of a recent algorithm. A second series of tests measuring the rendering quality was performed using compressed video or images from surrounding cameras. The overall quality of the system is dominated by rendering quality and not by coding.
DOI: https://doi.org/10.1109/3DTV.2009.5069627 | Published: 2009-05-04 | Citations: 35
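The key feature described, warping texture and depth together and postponing blending, can be illustrated with a toy 1D rectified-camera model in which disparity is proportional to inverse depth. This is a sketch under simplified assumptions, not the paper's full 3D warp.

```python
import numpy as np

def warp_view(texture, depth, shift_scale):
    """Forward-warp a texture and its depth map by a horizontal
    disparity ~ 1/Z, keeping the nearest surface via a z-buffer."""
    h, w = depth.shape
    out_tex = np.zeros_like(texture)
    out_depth = np.full((h, w), np.inf)       # inf marks holes
    for y in range(h):
        for x in range(w):
            d = shift_scale / depth[y, x]     # disparity from inverse depth
            xt = int(round(x + d))
            if 0 <= xt < w and depth[y, x] < out_depth[y, xt]:
                out_depth[y, xt] = depth[y, x]
                out_tex[y, xt] = texture[y, x]
    return out_tex, out_depth

def blend_views(tex1, dep1, tex2, dep2):
    """Late blending: prefer whichever warped view has a valid (finite)
    depth at each pixel; average where both are valid."""
    out = np.zeros_like(tex1, dtype=float)
    v1, v2 = np.isfinite(dep1), np.isfinite(dep2)
    both = v1 & v2
    out[both] = (tex1[both].astype(float) + tex2[both]) / 2
    out[v1 & ~v2] = tex1[v1 & ~v2]
    out[~v1 & v2] = tex2[~v1 & v2]
    return out
```

Because both warped views carry their own depth, disocclusion holes in one view can be filled from the other at blend time instead of corrupting a shared virtual depth map.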
A 3D avatar modeling of realworld objects using a depth camera
Ji-Ho Cho, Hyun Soo Kim, Kwan H. Lee
Abstract: In this paper, we propose a novel 3D avatar generation scheme using a depth camera. A depth camera can capture both visual and depth information of moving objects at video frame rate by using an infrared light source. Our method consists of two main steps: alpha matting and mesh generation. We present a novel alpha matting algorithm that combines visual and range information, improving on existing natural alpha matting methods. After alpha matting is performed, a triangular mesh is created from the RGB, depth, and alpha images. Our method can represent any real-world object, including furry ones. Experimental results show that our matting method produces better results than previous approaches. In particular, our method provides a viable solution for modeling a scene with fuzzy objects.
DOI: https://doi.org/10.1109/3DTV.2009.5069655 | Published: 2009-05-04 | Citations: 1
Registration of depth and video data in depth image based rendering
M. Fieseler, Xiaoyi Jiang
Abstract: Depth image based rendering (DIBR) has been proposed to create content for 3D-TV. In DIBR, stereoscopic images are created from monoscopic images and associated depth data. Techniques deducing depth information from available video content have been applied to process video data lacking associated depth data for DIBR. Yet, artificial as well as recorded depth data may contain misalignments with respect to the video data. Misaligned depth data is a source of artifacts observable in the rendered 3D view. We show that by using an edge based registration method the spatial alignment of depth and video data can be improved, leading to an alleviation of the observed artifacts.
DOI: https://doi.org/10.1109/3DTV.2009.5069677 | Published: 2009-05-04 | Citations: 9
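The edge-based registration idea can be illustrated by a brute-force search for the integer translation that best overlaps depth edges with video edges. This is a toy stand-in for the paper's method, with a simple gradient-threshold edge detector chosen for the sketch.

```python
import numpy as np

def edge_map(img, thresh=10):
    """Binary edge map from gradient magnitude (illustrative detector)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def best_shift(video, depth, max_shift=3):
    """Search small integer (dy, dx) translations of the depth map and
    return the one whose edges best overlap the video edges."""
    ev, ed = edge_map(video), edge_map(depth)
    best, score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(ed, dy, axis=0), dx, axis=1)
            s = np.logical_and(ev, shifted).sum()  # edge-overlap score
            if s > score:
                score, best = s, (dy, dx)
    return best
```

Applying the recovered shift to the depth map before warping is what alleviates the misalignment artifacts the abstract describes.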
iGLANCE: Transmission to medical high definition autostereoscopic displays
D. Ruijters, S. Zinger
Abstract: The healthcare branch of the iGLANCE project aims at making high quality high definition autostereoscopic displays available in the clinical operating room. Displaying medical images on an autostereoscopic display poses different requirements than consumer usage would. For entertainment it is sufficient when the perceived image is convincing, even when deviating from the actual imaged scene. For medical usage it is essential that the perceived image represents the actual clinical data. The challenge that the iGLANCE project intends to address is the transmission of the autostereoscopic data through a bandwidth limited channel, while maintaining an image that does not contain significant image artifacts, such as visible disocclusions.
DOI: https://doi.org/10.1109/3DTV.2009.5069626 | Published: 2009-05-04 | Citations: 18
Experimental investigation of holographic 3D-TV approach
M. Agour, T. Kreis
Abstract: A digital hologram is recorded by a 2D CCD array by superposition of the wavefield reflected or scattered from a scene and a coherent reference wave. If the recorded digital hologram is fed to a spatial light modulator (SLM) and this is illuminated by the reference wave, then the whole original wavefield can be reconstructed. The reconstructed wavefield contains phase and intensity distributions, which means it is fully 3D, exhibiting such effects as depth and parallax. Therefore, the concept of digital holography is a promising approach to 3D-TV. In one of our previous works the preliminaries of an all-digital-holographic approach to 3D-TV were given. Here one of our approaches is experimentally verified and its capabilities and limitations are investigated.
DOI: https://doi.org/10.1109/3DTV.2009.5069652 | Published: 2009-05-04 | Citations: 15
An efficient 2D to 3D video conversion method based on skeleton line tracking
Zheng Li, Xudong Xie, Xiaodong Liu
Abstract: 3DTV is becoming more and more popular, but 3D sequences are still scarce. In this paper, we propose an efficient 2D to 3D video conversion method based on skeleton line tracking. In our method, for a key frame, the foreground object and the corresponding depth image are obtained by an interactive method. For a non-key frame, we first generate the skeleton lines of the object in the previous frame and predict them in the current frame, then recover the object with the Lazy Snapping method. A robust and fast optical flow method is introduced to improve the prediction. Finally, the depth image is generated to composite the stereo image. Because only the skeleton lines of the object, rather than the whole object, are tracked, the computational complexity is much lower than that of other tracking methods. The experimental results show that our method is feasible.
DOI: https://doi.org/10.1109/3DTV.2009.5069622 | Published: 2009-05-04 | Citations: 14
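The skeleton-line prediction step, shifting skeleton points by dense optical flow, can be sketched as below. This is a minimal illustration assuming a precomputed flow field; the paper additionally recovers the full object contour around the predicted skeleton with Lazy Snapping.

```python
import numpy as np

def predict_skeleton(skeleton_pts, flow):
    """Shift each skeleton point by the dense optical-flow vector at its
    location to predict its position in the next frame.

    skeleton_pts: (N, 2) array of (x, y) points
    flow: (H, W, 2) dense flow field, flow[y, x] = (dx, dy)
    """
    h, w, _ = flow.shape
    predicted = []
    for x, y in skeleton_pts:
        # sample the flow at the nearest pixel, clamped to the image
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        dx, dy = flow[yi, xi]
        predicted.append((x + dx, y + dy))
    return np.array(predicted)
```

Tracking only these sparse skeleton points, instead of every object pixel, is what keeps the per-frame cost low.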