2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) - Latest Publications

Hardware implementation of an omnidirectional camera with real-time 3D imaging capability
Hossein Afshari, L. Jacques, Luigi Bagnato, A. Schmid, P. Vandergheynst, Y. Leblebici
DOI: 10.1109/3DTV.2011.5877192
Abstract: A novel hardware implementation of an omnidirectional image sensor is presented which is capable of acquiring and processing 3D image sequences in real time. The system consists of a hemispherical arrangement of a large number of CMOS imagers connected to a layered arrangement of high-end FPGA platforms responsible for data framing and image processing. The hardware platform in charge of real-time processing of the 3.8 Gb/s of data generated by the cameras is presented, and a first application of the system, consisting of omnidirectional image acquisition, is demonstrated.
Citations: 17
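
The 3.8 Gb/s aggregate rate quoted in the abstract is, to first order, just the product of imager count, resolution, frame rate and pixel depth. A minimal sketch of that arithmetic, using assumed sensor parameters rather than the paper's actual configuration:

```python
# Back-of-the-envelope aggregate throughput for a multi-imager rig.
# Imager count, resolution, frame rate and pixel depth are illustrative
# assumptions, not the parameters reported in the paper.

def aggregate_throughput_gbps(num_imagers, width, height, fps, bits_per_pixel):
    """Raw pixel data rate produced by all imagers, in Gb/s."""
    bits_per_second = num_imagers * width * height * fps * bits_per_pixel
    return bits_per_second / 1e9

# Example: ~40 VGA sensors at 25 fps with 10-bit raw pixels.
print(f"{aggregate_throughput_gbps(40, 640, 480, 25, 10):.2f} Gb/s")
```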
Bandwidth-efficient user dependent transmission for multi-view video
Ziyuan Pan, Yoshihisa Ikuta, M. Bandai, Takashi Watanabe
DOI: 10.1109/3DTV.2011.5877159
Abstract: Multi-view video allows a user to watch 3D video and select a desired viewpoint. In order to reduce switching delay, many approaches transmit all the views, which greatly increases the bandwidth requirement, and the traffic grows with the number of views in the multi-view video. To overcome these problems, we proposed a user dependent scheme called UDMVT for the transmission of multi-view video in [1], and further improved this scheme to support 3D multi-view video in [2]. The evaluation results showed that UDMVT efficiently reduces the bit rate of multi-view video transmission under the successive motion model, especially when the number of views is large. In this paper, we briefly introduce UDMVT and discuss its application to Free Viewpoint TV.
Citations: 2
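
A user dependent transmission scheme of this kind streams only the views near the user's current viewpoint instead of all of them. The sketch below illustrates that idea with a hypothetical `select_views` helper; the nearest-neighbour policy and the `radius` parameter are illustrative assumptions, not the UDMVT algorithm itself:

```python
# Minimal sketch of user-dependent view selection: transmit only the view
# closest to the user's viewpoint plus a few neighbours for fast switching.

def select_views(user_viewpoint: float, view_positions: list[float], radius: int = 1) -> list[int]:
    """Return indices of the nearest view and its `radius` neighbours on each side."""
    nearest = min(range(len(view_positions)),
                  key=lambda i: abs(view_positions[i] - user_viewpoint))
    lo = max(0, nearest - radius)
    hi = min(len(view_positions), nearest + radius + 1)
    return list(range(lo, hi))

# Eight equally spaced views; the user looks between views 2 and 3.
print(select_views(2.4, [float(i) for i in range(8)]))  # -> [1, 2, 3]
```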
Stereo video encoder optimization for mobile applications
Philipp Merkle, Jordi Bayo Singla, K. Muller, T. Wiegand
DOI: 10.1109/3DTV.2011.5877217
Abstract: This paper presents a stereo video encoder optimization for mobile applications. While video coding applications mostly concentrate on finding a good trade-off between rate and distortion, the additional constraint of limited processing power has to be considered for mobile applications. Realizing mobile video coding applications for stereo instead of 2D video is challenging, as twice the amount of video data has to be processed. Therefore, we investigate how the trade-off between rate, distortion, and complexity can be optimized for the multiview video coding (MVC) extension of the H.264/AVC standard. The paper focuses on the encoder, as its complexity is much higher and more configuration dependent than that of the decoder. By enabling and disabling certain tools and setting different parameter values, the encoder complexity can be adapted for mobile applications. The presented results show that an optimized MVC encoder configuration performs significantly faster without impairing the rate-distortion performance.
Citations: 10
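
The rate-distortion-complexity trade-off described above can be pictured as choosing, among candidate encoder configurations, the one with the best Lagrangian rate-distortion cost that still meets a processing-time budget. The configurations and numbers below are invented for illustration and are not the paper's measurements:

```python
# Sketch: pick the configuration with the lowest Lagrangian cost J = D + lambda * R
# among those that fit a processing-time budget.

def pick_config(configs, lmbda, time_budget):
    """configs: list of (name, distortion, rate_kbps, encode_time_s)."""
    feasible = [c for c in configs if c[3] <= time_budget]
    return min(feasible, key=lambda c: c[1] + lmbda * c[2])

configs = [
    ("full_search",    10.0, 1200.0, 9.0),   # best quality, too slow for mobile
    ("fast_me",        11.2, 1250.0, 4.5),
    ("fast_me_no_8x8", 13.5, 1330.0, 2.8),
]
print(pick_config(configs, lmbda=0.05, time_budget=5.0)[0])  # -> "fast_me"
```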
Fast all in-focus light field rendering using dynamic block-based focusing technique
Young-Sun Jeon, Hyunwook Park
DOI: 10.1109/3DTV.2011.5877162
Abstract: This paper presents a new block-based focusing technique for all in-focus light field rendering (LFR). In the proposed method, synthesized images at different depths for a desired viewpoint are first obtained by the conventional LFR method. These synthesized images contain both low- and high-frequency artifacts in out-of-focus regions. The proposed dynamic block-based focusing technique is then applied to the synthesized images block by block. Using this process, all in-focus images can be reconstructed. Experimental results provide both objective and subjective evaluations against previous works. The proposed method saves processing time, and the rendered images have acceptable quality.
Citations: 1
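
A minimal sketch of the per-block focus selection idea: given renderings of the same view focused at several depths, each block keeps the depth layer with the highest local sharpness. The Laplacian-energy measure and the block size are assumptions standing in for the paper's criterion:

```python
import numpy as np
from scipy.ndimage import laplace

def all_in_focus(layers: list[np.ndarray], block: int = 16) -> np.ndarray:
    """layers: grayscale renderings (H, W) of one view, focused at different depths."""
    h, w = layers[0].shape
    out = np.empty_like(layers[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            patches = [lay[y:y + block, x:x + block] for lay in layers]
            # Laplacian energy as a simple sharpness measure per block.
            sharpness = [np.mean(laplace(p.astype(float)) ** 2) for p in patches]
            out[y:y + block, x:x + block] = patches[int(np.argmax(sharpness))]
    return out

# Toy usage: two random "depth layers" of the same size.
layers = [np.random.rand(64, 64), np.random.rand(64, 64)]
print(all_in_focus(layers).shape)  # (64, 64)
```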
Temporal and inter-view skip modes for multi-view video coding
Jin Young Lee, Hochen Wey, Du-sik Park, Chang-Yeong Kim
DOI: 10.1109/3DTV.2011.5877166
Abstract: The multi-view video coding (MVC) standard includes all the highly advanced techniques of H.264/AVC. In particular, the motion estimation techniques are extended to reduce not only temporal but also inter-view redundancies in MVC; however, motion vector prediction for the skip mode is performed only in the temporal direction. Therefore, we introduce a temporal skip mode using reference pictures from the same view as the current picture and an inter-view skip mode employing reference pictures from views different from the current picture. The proposed method selects the better of the temporal and inter-view skip modes based on rate-distortion (RD) optimization. Experimental results illustrate that the proposed method is significantly more efficient than the conventional method.
Citations: 4
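
The mode decision reduces to evaluating both skip candidates with the usual Lagrangian cost J = D + λR and keeping the cheaper one. A sketch with placeholder cost values (not measurements from the paper):

```python
# Sketch of the RD-based decision between the two skip modes.

def choose_skip_mode(d_temporal, r_temporal, d_interview, r_interview, lmbda):
    j_t = d_temporal + lmbda * r_temporal      # cost of the temporal skip candidate
    j_v = d_interview + lmbda * r_interview    # cost of the inter-view skip candidate
    return ("temporal_skip", j_t) if j_t <= j_v else ("interview_skip", j_v)

print(choose_skip_mode(d_temporal=120.0, r_temporal=2.0,
                       d_interview=95.0, r_interview=3.0, lmbda=20.0))
# -> ('interview_skip', 155.0)
```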
Computer generated holography for computer graphics
P. Lobaz
DOI: 10.1109/3DTV.2011.5877153
Abstract: 3D displays are now being rapidly designed, enhanced and sold. It is expected that after the boom of displays based on active glasses, new display generations will appear: probably displays with passive glasses, multiview autostereoscopic displays based on parallax barrier or lenticular technology, and displays based on integral photography principles. Each of these types would eventually be outperformed by holographic displays once they mature. Holography is a concept long known to opticians; for the computer graphics community, however, it is a new, undiscovered world. The tutorial provides the basic principles of classical holography for people used to thinking in terms of rays instead of wave optics, and for those with little experience of modern optics. These principles are then exploited in a talk about the digital aspects of holography and about building holographic renderers. Every participant of the tutorial should then be able to build his or her own basic hologram renderer, to understand the principles of advanced ones, and to set up a very low budget holographic laboratory.
Citations: 0
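
As a companion to the tutorial's topic, here is a minimal wave-optics illustration of the core idea: the hologram-plane field is a superposition of spherical waves emitted by scene points, and the recorded fringes are its interference with a reference wave. The geometry, wavelength and on-axis plane-wave reference are arbitrary choices for this sketch, not the tutorial's renderer:

```python
import numpy as np

wavelength = 633e-9                      # He-Ne red, metres
k = 2 * np.pi / wavelength
n = 512                                  # hologram resolution (n x n samples)
pitch = 10e-6                            # sample spacing on the hologram plane, metres
xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

points = [(0.0, 0.0, 0.2), (1e-3, 5e-4, 0.25)]   # (x, y, z) scene points, metres

field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r      # spherical wave from each scene point

reference = 1.0                          # on-axis unit-amplitude plane wave
fringes = np.abs(field + reference) ** 2 # intensity pattern a hologram would record
print(fringes.shape, fringes.dtype)
```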
Laser-based multi-user multi-modal 3D displays
P. Surman
DOI: 10.1109/3DTV.2011.5877152
Abstract: In HELIUM3D, a glasses-free (autostereoscopic) display is under development by an eight-member consortium with funding from the European Union. The project commenced in 2008 and will finish in June 2011. This display is designed to deliver 3D to several users by employing head position tracking, so that the positions where stereoscopic image pairs can be observed, referred to as exit pupils, can be directed independently to several viewers' eyes. In order to obtain the necessary control over the light exiting the screen, a red, green and blue laser source is employed. Laser illumination is not used for its high coherence, as in holographic displays, but for its low étendue, which is a measure of how spread out the light is in area and angle.
Citations: 1
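
To make the "low étendue" remark concrete: étendue is, to first order, emitting area times the solid angle into which the source radiates, and a collimated laser scores orders of magnitude lower than an LED. The source parameters below are generic ballpark values, not figures from the HELIUM3D project:

```python
import math

def etendue_mm2_sr(area_mm2: float, half_angle_deg: float) -> float:
    """Area x solid angle of an emission cone with the given half-angle (n = 1)."""
    omega = 2 * math.pi * (1 - math.cos(math.radians(half_angle_deg)))
    return area_mm2 * omega

laser = etendue_mm2_sr(area_mm2=0.1, half_angle_deg=0.06)   # ~0.1 mm^2 beam, ~1 mrad divergence
led = etendue_mm2_sr(area_mm2=1.0, half_angle_deg=60.0)     # 1 mm^2 die, wide emission cone
print(f"laser ~ {laser:.2e} mm^2*sr, LED ~ {led:.2e} mm^2*sr")
```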
Virtual out of focus with single image to enhance 3D perception
Heechul Han, Jingu Jeong, Emi Arai
DOI: 10.1109/3DTV.2011.5877188
Abstract: The color, contrast and detail of an in-focus object are accentuated, whereas those of the background are attenuated, based on a depth map estimated with a face detection and segmentation method, in order to enhance 3D perception. Considering human perception and real out-of-focus images taken with a wide-aperture lens, we apply a boundary gradation to the estimated depth map to handle blurring errors. To produce the blurred background, we use a modified Gaussian pyramid, scaling up and blending all of the images.
Citations: 3
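
A minimal sketch of depth-dependent defocus: blend the sharp image with a blurred copy using a softened foreground mask, so the subject stays crisp and the background falls out of focus across a gradual boundary. A single Gaussian blur stands in for the paper's modified Gaussian pyramid, and the mask softening and blur strength are assumed parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def virtual_defocus(image: np.ndarray, fg_mask: np.ndarray, max_sigma: float = 8.0) -> np.ndarray:
    """image: (H, W) grayscale in [0, 1]; fg_mask: 1 on the subject, 0 on the background."""
    # Soften the mask boundary so the sharp-to-blurred transition is gradual
    # rather than a hard edge (the "boundary gradation" idea).
    soft = gaussian_filter(fg_mask.astype(float), sigma=4.0)
    blurred = gaussian_filter(image, sigma=max_sigma)
    return soft * image + (1.0 - soft) * blurred

img = np.random.rand(64, 64)
mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1.0
print(virtual_defocus(img, mask).shape)  # (64, 64)
```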
Depth video camera based temporal alpha matting for natural 3D scene generation
Ji-Ho Cho, T. Yamasaki, K. Aizawa, Kwan H. Lee
DOI: 10.1109/3DTV.2011.5877164
Abstract: We present a new method for fully automated video matting. This method uses depth information acquired by a depth camera to automatically compute trimaps. Trimaps segment an image into three nonoverlapping regions (foreground, background, and unknown), and generation of a highly accurate trimap is one of the most important tasks in natural alpha matting. We propose an adaptive approach to generate unknown regions according to the fuzziness of the foreground object. Fuzzy regions have wide unknown regions, whereas areas that contain sharp edges have narrow unknown areas. We further extend the standard closed-form matting method to optimize both spatial and temporal domains. Our results show the proposed method significantly reduces flickering artifacts and generates natural 3D scenes.
Citations: 11
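
A sketch of depth-driven trimap generation in the spirit described above: threshold the depth map into foreground/background, then mark a band around the boundary as unknown, widening it where the colour image looks fuzzy. The thresholds and the low-gradient fuzziness proxy are assumptions, not the paper's adaptive rule:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, sobel

def trimap_from_depth(depth: np.ndarray, image: np.ndarray, fg_thresh: float,
                      narrow: int = 2, wide: int = 6) -> np.ndarray:
    """Return a trimap: 1 = foreground, 0 = background, 0.5 = unknown."""
    fg = depth < fg_thresh                       # nearer than threshold -> foreground
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    fuzzy = grad < np.percentile(grad, 50)       # low local contrast stands in for "fuzziness"
    band_narrow = binary_dilation(fg, iterations=narrow) & ~binary_erosion(fg, iterations=narrow)
    band_wide = binary_dilation(fg, iterations=wide) & ~binary_erosion(fg, iterations=wide)
    unknown = band_narrow | (band_wide & fuzzy)  # widen the unknown band only where fuzzy
    trimap = np.where(fg, 1.0, 0.0)
    trimap[unknown] = 0.5
    return trimap

depth = np.random.rand(64, 64); image = np.random.rand(64, 64)
print(np.unique(trimap_from_depth(depth, image, fg_thresh=0.5)))
```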
Design and implementation of high-performance video processor for head-mounted displays
Hou Zuoxun, Ge Chenyang, Zhao Wenzhe, Li Longjun, Zheng Nanning
DOI: 10.1109/3DTV.2011.5877154
Abstract: This paper presents the design of a high-performance processor for head-mounted display (HMD) applications targeting stereo video processing. The proposed hardware architecture of the video processor consists of three major parts: an adaptive 3-dimensional (3D) video decoder that accurately decodes the stereo composite video baseband signal (CVBS) source, a video source separation module that generates the 3D display output while maintaining the original field frequencies on both output channels, and an image post-processing module that enhances display quality. Furthermore, the paper discusses the key design issue of a compact hardware structure for SDRAM access, ultimately realized with a single general-purpose SDRAM through data clustering and integration. Both FPGA and ASIC implementations are carried out, and a careful comparison of the results shows that the designed video processor produces a convincing immersive 3D experience at limited cost while effectively reducing noise, flicker and crosstalk.
Citations: 1
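
A toy sketch of the source-separation step: a frame-sequential stereo stream is demultiplexed into two per-eye channels, each held at the input rate by repeating its latest frame. The frame-sequential input format and the `separate_stereo` helper are assumptions for illustration, not the CVBS packing the paper's decoder actually handles:

```python
def separate_stereo(frames: list) -> tuple[list, list]:
    """frames alternate L, R, L, R, ...; return per-eye streams at the input rate."""
    left, right = [], []
    last_l, last_r = None, None
    for i, frame in enumerate(frames):
        if i % 2 == 0:
            last_l = frame        # even positions carry the left eye
        else:
            last_r = frame        # odd positions carry the right eye
        left.append(last_l)       # repeat the latest frame to keep the field rate
        right.append(last_r)
    return left, right

L, R = separate_stereo(["L0", "R0", "L1", "R1"])
print(L)  # ['L0', 'L0', 'L1', 'L1']
print(R)  # [None, 'R0', 'R0', 'R1']
```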