2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video - Latest Publications

Automatic player's view generation of real soccer scenes based on trajectory tracking
N. Kasuya, I. Kitahara, Y. Kameda, Y. Ohta
{"title":"Automatic player's view generation of real soccer scenes based on trajectory tracking","authors":"N. Kasuya, I. Kitahara, Y. Kameda, Y. Ohta","doi":"10.1109/3DTV.2009.5069623","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069623","url":null,"abstract":"This paper proposes a method to generate a player's view video of an actual soccer match using a 3D free-viewpoint video technique. Users can enjoy 3D video simply by choosing a target player and concentrating on a soccer match from the player's view by manually controlling the viewing position. The generated video provides an immersive sight as if the user is running on the pitch. To generate the player views, the target player's 3D trajectory must be estimated. We developed a novel computer vision technique for player tracking that robustly works in an actual soccer stadium. In the current system, the orientation of the virtual camera for the player's view does not follow the gaze direction of each player because the image resolution is too poor to acquire gaze direction by computer vision. Users can choose a favorite orientation control method. We applied the proposed method to an actual soccer match held in an outdoor stadium to confirm its effectiveness.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126111769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
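As a concrete illustration of the orientation-control choice mentioned in the abstract, here is a minimal Python sketch of one plausible mode: aiming the virtual camera along the player's smoothed running direction, since gaze cannot be recovered at this resolution. The function name, eye-height constant, and moving-average smoothing are illustrative assumptions, not details from the paper.

import numpy as np

def look_along_motion(trajectory, eye_height=1.7, smooth=5):
    """Virtual camera per frame: position at the tracked player location
    (eye height above the pitch), forward vector along the smoothed
    running direction.  trajectory: (T, 2) ground-plane positions in m."""
    traj = np.asarray(trajectory, dtype=float)
    kernel = np.ones(smooth) / smooth        # moving average vs. tracking jitter
    sm = np.column_stack([np.convolve(traj[:, i], kernel, mode="same")
                          for i in range(2)])
    d = np.gradient(sm, axis=0)              # per-frame displacement
    d /= np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-9)
    pos = np.column_stack([sm, np.full(len(sm), eye_height)])
    fwd = np.column_stack([d, np.zeros(len(d))])
    return pos, fwd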
Random Hole Display: A non-uniform barrier autostereoscopic display
Andrew Nashel, H. Fuchs
{"title":"Random Hole Display: A non-uniform barrier autostereoscopic display","authors":"Andrew Nashel, H. Fuchs","doi":"10.1109/3DTV.2009.5069665","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069665","url":null,"abstract":"A novel design for an autostereoscopic (AS) display is demonstrated featuring a randomized hole distribution parallax barrier. The Random Hole Display (RHD) design eliminates the repeating zones found in regular barrier and lenticular autostereoscopic displays, enabling multiple simultaneous viewers in arbitrary locations. The primary task of a multi-user AS display is to deliver the correct and unique view to each eye of each observer. If multiple viewers see the same pixels behind the barrier, then a conflict occurs. Regular barrier displays have no conflicts between views for many viewer positions, but have significant, localized conflicts at regular intervals across the viewing area and when viewed at different distances from the display. By randomizing the barrier pattern the RHD exhibits a small amount of conflict between viewers, distributed across the display, in all situations. Yet it never exhibits the overwhelming conflicts between multiple views that are inherent in conventional AS displays. With knowledge of user locations, the RHD presents the proper stereoscopic view to one or more viewers. It further mitigates viewing conflicts by allowing display pixels that are seen by more than one viewer to remain active by optionally blending the similar colors of desired views. Interference between views for random hole barriers and for a conventional regular barrier pattern are simulated. Results from a proof-of-concept Random Hole Display are presented.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124660191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
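A minimal simulation sketch of the conflict analysis described above, reduced to one dimension: build a random-hole and a regular barrier mask with the same duty cycle, trace which display pixels each eye sees through the holes, and count the pixels visible to both eyes. All geometry constants (barrier gap, viewing distance, pixel pitch, eye separation) are illustrative assumptions, not the paper's prototype parameters.

import numpy as np

rng = np.random.default_rng(0)

def visible_pixels(hole_mask, viewer_x, gap=5.0, dist=600.0, pitch=0.25):
    """1-D geometry sketch: which display pixels does an eye at viewer_x
    (mm, relative to screen center) see through the holes of a barrier
    mounted gap mm in front of a display dist mm from the viewer?"""
    n = hole_mask.size
    hole_x = (np.nonzero(hole_mask)[0] - n / 2) * pitch
    # Ray from the eye through each hole, continued to the display plane.
    px = hole_x + (hole_x - viewer_x) * gap / (dist - gap)
    idx = np.round(px / pitch + n / 2).astype(int)
    return set(idx[(idx >= 0) & (idx < n)])

n = 2000
random_mask = rng.random(n) < 0.25            # randomized holes, 25% open
regular_mask = (np.arange(n) % 4) == 0        # regular barrier, same duty cycle

for name, mask in (("random", random_mask), ("regular", regular_mask)):
    left = visible_pixels(mask, viewer_x=-32.5)    # eyes 65 mm apart
    right = visible_pixels(mask, viewer_x=+32.5)
    print(name, "barrier:", len(left & right), "pixels seen by both eyes")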
Real-time color holographic video display system
F. Yaras, Hoonjong Kang, L. Onural
{"title":"Real-time color holographic video display system","authors":"F. Yaras, Hoonjong Kang, L. Onural","doi":"10.1109/3DTV.2009.5069660","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069660","url":null,"abstract":"A real-time multi-GPU color holographic video display system computes holograms from 3D video of a rigid object. System has three main stages; client, server and optics. 3D coordinate and texture information are kept in client and sent online to the server through the network. In the server stage, with the help of the parallel processing ability of the GPUs and segmentation algorithms, phase-holograms are computed in real-time. The graphic card of the server computer drives the SLMs and red, green and blue channels are controlled in parallel. Resultant color holographic video is loaded to the SLMs which are illuminated by expanded light from LEDs. In the optics stage, reconstructed color components are combined by using beam splitters. Reconstructions are captured by a CCD array without any supporting optics. Experimental results are satisfactory.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128841161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
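To make the hologram-computation stage concrete, here is a minimal point-source sketch for a single color channel: superpose spherical waves from the object points and keep only the phase, as a phase-only SLM would display. This is a generic textbook formulation under assumed parameters, not the paper's GPU-parallel, segmentation-based algorithm.

import numpy as np

def phase_hologram(points, amplitudes, wavelength, slm_shape=(512, 512),
                   pixel_pitch=8e-6):
    """Point-source hologram for one color channel: superpose spherical
    waves from 3D object points (x, y, z in meters, z > 0 behind the SLM)
    and keep only the phase for a phase-only SLM."""
    h, w = slm_shape
    ys, xs = np.meshgrid((np.arange(h) - h / 2) * pixel_pitch,
                         (np.arange(w) - w / 2) * pixel_pitch,
                         indexing="ij")
    k = 2.0 * np.pi / wavelength
    field = np.zeros(slm_shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r    # spherical wave from the point
    return np.angle(field)                     # phase in [-pi, pi]

# One hologram per channel, each matched to its LED wavelength, e.g.:
# phi_red = phase_hologram(pts, amps, wavelength=633e-9)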
Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform
T. Schairer, B. Huhle, W. Straßer
{"title":"Increased accuracy orientation estimation from omnidirectional images using the spherical Fourier transform","authors":"T. Schairer, B. Huhle, W. Straßer","doi":"10.1109/3DTV.2009.5069674","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069674","url":null,"abstract":"Orientation estimation based on image data is a key technique in many applications and robust estimates are possible in case of omnidirectional images. A very efficient technique is to solve the problem in Fourier space. In this paper we present a fast and simple method to overcome one of the main draw-backs of this approach, namely the large quantization steps. Due to high memory demands, the Fourier-based solution can be computed on low-resolution input only and the resulting rotation estimate is given on an equiangular grid. We estimate the mode of the likelihood density based on the grid values in order to obtain a rotation estimate of increased accuracy. We show results on data captured with a spherical video camera and validate the approach comparing the orientation estimates of the real data to the ground-truth values.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124504563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
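The sub-grid refinement step can be realized with a standard quadratic (parabolic) fit around the grid maximum; whether the paper uses exactly this estimator for the mode of the likelihood density is not stated, so treat the following Python sketch as one common realization. Indices wrap around, which is consistent with periodic angles.

import numpy as np

def refine_peak_1d(values, i):
    """Fit a parabola through values[i-1..i+1] and return the sub-grid
    offset of its vertex, a value in (-0.5, 0.5)."""
    ym, y0, yp = np.take(values, [i - 1, i, i + 1], mode="wrap")
    denom = ym - 2.0 * y0 + yp
    return 0.0 if denom == 0 else 0.5 * (ym - yp) / denom

# Stand-in for the rotation-correlation grid over ZYZ Euler angles that a
# spherical-Fourier correlation produces on an equiangular grid:
corr = np.random.rand(64, 64, 64)
i, j, k = np.unravel_index(np.argmax(corr), corr.shape)
step = 2.0 * np.pi / 64                      # equiangular grid spacing
alpha = (i + refine_peak_1d(corr[:, j, k], i)) * step   # refined angle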
Fast approximate focal stack transform
J. G. Marichal-Hernández, J. P. Luke, F. Rosa, F. Pérez Nava, J. Rodríguez-Ramos
{"title":"Fast approximate focal stack transform","authors":"J. G. Marichal-Hernández, J. P. Luke, F. Rosa, F. Pérez Nava, J. Rodríguez-Ramos","doi":"10.1109/3DTV.2009.5069644","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069644","url":null,"abstract":"In this work we develop a new algorithm, extending the Fast Digital Radon transform from Götz and Druckmüller (1996), that is capable of generating the approximate focal stack of a scene, previously measured with a plenoptic camera, with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N4) to achieve a volume consisting of 2N − 1 photographic planes focused at different depths, from a N4 light field. The method is close to real-time performance, and its output can be used to estimate the distances to objects of a scene.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121908334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
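For reference, the computation that the fast transform approximates is plain shift-and-add refocusing of the light field, which needs only integer shifts and sums. The sketch below is this naive baseline, not the paper's Radon-based algorithm; the array layout and shift convention are assumptions.

import numpy as np

def focal_stack(lightfield, shifts):
    """Naive shift-and-add refocusing baseline. lightfield has shape
    (U, V, Y, X): a U x V grid of sub-aperture images. Each focal plane
    is the sum of all sub-aperture images, shifted proportionally to
    their angular offset -- integer shifts and sums only."""
    U, V, Y, X = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    planes = []
    for s in shifts:                          # one shift slope per focal depth
        acc = np.zeros((Y, X))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * s))
                dx = int(round((v - cv) * s))
                acc += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
        planes.append(acc / (U * V))          # normalize for display
    return np.stack(planes)

# e.g. stack = focal_stack(lf, shifts=np.linspace(-2, 2, 9))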
Fast gradient-based mesh generation method for the stereo image representation
Ilkwon Park, H. Byun
{"title":"Fast gradient-based mesh generation method for the stereo image representation","authors":"Ilkwon Park, H. Byun","doi":"10.1109/3DTV.2009.5069672","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069672","url":null,"abstract":"This paper proposes a fast gradient-based mesh generation method for the stereo image representation. In our approach, right image in stereo image and intermediate views are fundamentally predicted and synthesized from left image by 2D image warping using regular mesh and vice versa. To overcome texture distortion on object boundaries and preserve texture smoothness of homogenous areas, we propose node selection on the strong edges in the gradient map. Furthermore, the selected nodes are evaluated by stereo matching error and validated by cross validation for nodal disparity. Each node point is iteratively moving along the pixels with high gradient value instead every point to find the optimal node position. Therefore, our approach provides a fast mesh optimization as well as a reliable image quality. The experimental results show that the proposed approach provides computational efficiency and high PSNR of prediction image compared to previous methods.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129473575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
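A minimal sketch of the node-selection idea: compute a gradient-magnitude map and move each node of a regular grid to the strongest-gradient pixel in a small search window. The stereo-matching-error evaluation and cross-validation steps from the abstract are omitted, and the grid step and window size are illustrative.

import numpy as np
from scipy.ndimage import sobel

def snap_nodes_to_edges(gray, grid_step=16, search=4):
    """Start from a regular mesh grid and move each node to the
    highest-gradient pixel inside a small search window, so mesh edges
    align with strong image edges."""
    g = np.asarray(gray, dtype=float)
    grad = np.hypot(sobel(g, axis=1), sobel(g, axis=0))  # gradient magnitude
    h, w = g.shape
    nodes = []
    for y in range(0, h, grid_step):
        for x in range(0, w, grid_step):
            y0, y1 = max(y - search, 0), min(y + search + 1, h)
            x0, x1 = max(x - search, 0), min(x + search + 1, w)
            win = grad[y0:y1, x0:x1]
            dy, dx = np.unravel_index(np.argmax(win), win.shape)
            nodes.append((y0 + dy, x0 + dx))  # node snapped to strongest edge
    return np.array(nodes), grad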
Optimization and comparison of coding algorithms for mobile 3DTV
G. Tech, A. Smolic, H. Brust, P. Merkle, K. Dix, Y. Wang, K. Muller, T. Wiegand
{"title":"Optimization and comparision of coding algorithms for mobile 3DTV","authors":"G. Tech, A. Smolic, H. Brust, P. Merkle, K. Dix, Y. Wang, K. Muller, T. Wiegand","doi":"10.1109/3DTV.2009.5069668","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069668","url":null,"abstract":"Different methods for coding of stereo video content for mobile 3DTV are examined and compared. These methods are H.264/MPEG-4 AVC simulcast transmission, H.264/MPEG-4 AVC Stereo SEI message, mixed resolution coding, and video plus depth coding using MPEG-C Part 3. The first two methods are based on a full left and right video (V+V) representation, the third method uses a full and a subsampled view and the fourth method is based on a one video plus associated depth (V+D) representation. Each method was optimized and tested using professional 3D video content. Subjective tests were carried out on a small size autostereoscopic display that is used in mobile devices. A comparison of the four methods at two different bitrates is presented. Results are provided in average subjective scoring, PSNR and VSSIM (Video Structure Similarity).","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129701485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
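For reference, the PSNR figure reported in such comparisons is computed from the mean squared error between the reference and decoded frames. A minimal Python helper (the function name is illustrative):

import numpy as np

def psnr(ref, dec, peak=255.0):
    """PSNR in dB: 10 * log10(peak^2 / MSE) between a reference frame
    and a decoded frame with the given peak sample value."""
    mse = np.mean((ref.astype(float) - dec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)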
Similarity measures for depth estimation
K. Wegner, O. Stankiewicz
{"title":"Similarity measures for depth estimation","authors":"K. Wegner, O. Stankiewicz","doi":"10.1109/3DTV.2009.5069670","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069670","url":null,"abstract":"This paper deals with similarity measures for stereoscopic depth estimation. These measures are used for matching of image pairs, which is the first step of the estimation process. We analyze influence of these similarity measures on performance of depth estimation with use of commonly known measures and compare the results with some novel proposals. The performance is judged by increase of quality of view synthesis, which is the main aim of this paper. Experimental results over a variety of moving material demonstrate that considerable gain can be attained without any modifications to estimation core and with tuning of matching stage only. Finally, some guidelines on design of well performing similarity measures are given. For the sake of paper, the whole work is described in context of belief-propagation algorithm, but the results and conclusions apply in general for many other state-of-the art optimization techniques.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129992753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
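A minimal sketch of the matching stage being tuned here: a window-aggregated per-pixel cost (SAD or SSD shown; the paper also studies novel measures) for one disparity hypothesis, which a global optimizer such as belief propagation then consumes as its data term. Window size and border handling are illustrative choices.

import numpy as np
from scipy.signal import convolve2d

def matching_cost(left, right, d, window=5, measure="sad"):
    """Window-aggregated matching cost between rectified images for one
    disparity hypothesis d: left(x) is compared against right(x - d)."""
    shifted = np.roll(right.astype(float), d, axis=1)  # right(x - d); wraps at border
    diff = left.astype(float) - shifted
    pix = np.abs(diff) if measure == "sad" else diff ** 2   # "sad" or "ssd"
    box = np.ones((window, window)) / window ** 2           # box aggregation
    return convolve2d(pix, box, mode="same", boundary="symm")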
Integration of 3D audio and 3D video for FTV
M. P. Tehrani, T. Yendo, T. Fujii, K. Takeda, K. Mase, M. Tanimoto
{"title":"Integration of 3D audio and 3D video for FTV","authors":"M. P. Tehrani, T. Yendo, T. Fujii, K. Takeda, K. Mase, M. Tanimoto","doi":"10.1109/3DTV.2009.5069681","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069681","url":null,"abstract":"We developed an FTV system to process, and display information of 3D scene in realtime, in which users freely control their own viewpoint/listening-point position. Free listening-point can be generated by either (i) ray-space representation of sound wave field (source sound independent), or (ii) by acoustic transfer function estimation (source sound dependent) and blind separation of sources of sounds. Free viewpoint generation is based on ray-space method, which is enhanced by using multipass dynamic programming. Integration is done by either (i) ray-space representation of sound wave and images together, or (ii) integrating each camera video signal and acoustic transfer function of the same location as integrated 3DAV data. The prototype system of integrated audio-visual viewer achieves both good image and sound qualities in realtime.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127745481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Vertex partitioning based Multiple Description Coding of 3D dynamic meshes
M. Oguz Bici, N. Stefanoski, G. Akar
{"title":"Vertex partitioning based Multiple Description Coding of 3D dynamic meshes","authors":"M. Oguz Bici, N. Stefanoski, G. Akar","doi":"10.1109/3DTV.2009.5069641","DOIUrl":"https://doi.org/10.1109/3DTV.2009.5069641","url":null,"abstract":"In this paper, we propose a Multiple Description Coding (MDC) method for reliable transmission of compressed time consistent 3D dynamic meshes. It trades off reconstruction quality for error resilience to provide the best expected reconstruction of 3D mesh sequence at the decoder side. The method is based on partitioning the mesh vertices into two sets and encoding each set independently by a 3D dynamic mesh coder. The encoded independent bitstreams or socalled descriptions are transmitted independently. The 3D dynamic mesh coder is based on predictive coding with spatial and temporal layered decomposition. In addition, the proposed method allows for different redundancy allocations by duplicating a number of encoded spatial layers in both sets. The algorithm is evaluated with redundancy-rate-distortion curves and flexible trade-off between redundancy and side distortions can be achieved.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121169272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
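A minimal sketch of the vertex-partitioning idea: split the vertices into two descriptions and, when only one description arrives, conceal each missing vertex from its received neighbors. The even/odd index split and neighbor-averaging concealment are illustrative stand-ins for the paper's connectivity-aware partitioning and predictive layered coder.

import numpy as np

def split_descriptions(vertex_positions):
    """Partition mesh vertices into two descriptions. An even/odd index
    split stands in for the paper's connectivity-aware partitioning."""
    even = {i: p for i, p in enumerate(vertex_positions) if i % 2 == 0}
    odd = {i: p for i, p in enumerate(vertex_positions) if i % 2 == 1}
    return even, odd

def side_reconstruct(received, neighbors):
    """Side decoder: only one description arrived. Conceal each missing
    vertex by averaging its received 1-ring neighbors (illustrative, not
    the paper's predictive layered scheme)."""
    out = dict(received)
    for v, nbrs in neighbors.items():           # neighbors: vertex -> 1-ring ids
        known = [received[n] for n in nbrs if n in received]
        if v not in received and known:
            out[v] = np.mean(known, axis=0)
    return out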