2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON): Latest Publications

Light field denoising: exploiting the redundancy of an epipolar sequence representation
Alireza Sepas-Moghaddam, P. Correia, F. Pereira
DOI: https://doi.org/10.1109/3DTV.2016.7548963
Abstract: Many current light field cameras are based on a single sensor with an overlaid micro lens array, making them more susceptible to noise. This paper proposes a novel light field denoising solution to reduce the effect of Gaussian noise, as this is the type of noise most commonly assumed to arise in the acquisition process. The proposed solution takes a noisy light field image and converts it to a sequence of epipolar images, using as an intermediate step a representation based on the ordered sequence of the sub-aperture images. The created epipolar sequence is finally processed by a powerful, generic video denoising engine. The performance of the proposed denoising solution has been assessed using the PSNR and SSIM metrics for a representative set of rendered 2D views. The obtained results for two representative datasets compare favorably against state-of-the-art light field denoising methods, both in terms of objective assessment and visual appearance.
Citations: 21
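The intermediate representation this abstract describes, reordering a 4D light field into epipolar images (EPIs), can be illustrated with a minimal sketch. The array layout `(Vy, Vx, H, W)` and the function name are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def horizontal_epis(lf):
    """Yield horizontal epipolar images from a 4D light field.

    lf : ndarray of shape (Vy, Vx, H, W)
         A Vy x Vx grid of sub-aperture images, each H x W (grayscale).

    For a fixed angular row vy and spatial row y, stacking the same pixel
    row across all horizontal views gives one EPI of shape (Vx, W); scene
    points trace lines in it whose slopes encode depth, which is the
    redundancy a denoiser can exploit.
    """
    Vy, Vx, H, W = lf.shape
    for vy in range(Vy):
        for y in range(H):
            yield lf[vy, :, y, :]  # one (Vx, W) epipolar image

# toy example: a 5x5 grid of 8x8-pixel sub-aperture views
lf = np.random.rand(5, 5, 8, 8)
epis = list(horizontal_epis(lf))
```

Each EPI in the sequence can then be handed to a generic video denoiser, treating the sequence as frames.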
Fusion of pose and head tracking data for immersive mixed-reality application development
Katarzyna Czesak, R. Mohedano, P. Carballeira, J. Cabrera, N. García
DOI: https://doi.org/10.1109/3DTV.2016.7548886
Abstract: This work addresses the creation of a development framework where application developers can create, in a natural way, immersive physical activities in which users experience a 3D first-person perception of full body control. The proposed framework is based on commercial motion sensors and a Head-Mounted Display (HMD), and uses Unity 3D as a unifying environment where user pose, virtual scene and immersive visualization functions are coordinated. Our proposal is exemplified by the development of a toy application showing its practical use.
Citations: 4
A novel image quality index for stereo image
Jian Ma, P. An, Zhixiang You, Liquan Shen
DOI: https://doi.org/10.1109/3DTV.2016.7548959
Abstract: Stereo image quality assessment provides computational methods to automatically assess the quality of stereo images in a perceptually consistent manner. In this paper, a novel full-reference stereo image quality index highlighting the double-channel nature of binocular combination is introduced to assess stereo image quality. First, since visual sensitivity varies with the spatial frequency of the stimulus, both reference and distorted stereo pairs are filtered by a CSF (contrast sensitivity function). Second, in order to mimic the binocular fusion mechanism of the HVS (human visual system), we extract the binocular energy response of both reference and distorted stereo pairs based on the magnitude response of a Gabor filtering measure. Third, since the HVS is enormously complex, especially regarding the binocular rivalry caused by asymmetric distortions of the stereo pair, we address the issue of the dominant view by using a block-based contrast measure. Finally, a single estimate of the overall perceived quality of the stereo images is obtained by pooling the qualities of the two channels. Experimental results show that the proposed metric achieves significantly higher consistency with subjective scores.
Citations: 0
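The binocular energy step in the abstract (Gabor magnitude responses of the two views, combined) can be sketched as follows. This is a crude stand-in under stated assumptions: the kernel parameters, the simple sum across views, and all function names are illustrative, not the paper's actual formulation:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid
    of spatial frequency `freq` (cycles/pixel) at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def binocular_energy(left, right, freq=0.1, theta=0.0):
    """Combine the Gabor magnitude responses of the left and right views
    into a single energy map (a simplistic fusion model)."""
    k = gabor_kernel(freq, theta)
    e_left = np.abs(fftconvolve(left, k, mode='same'))
    e_right = np.abs(fftconvolve(right, k, mode='same'))
    return e_left + e_right

left = np.random.rand(32, 32)
right = np.random.rand(32, 32)
energy = binocular_energy(left, right)
```

A full metric would compare such energy maps between the reference and distorted pairs before pooling.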
Improved 2D-to-3D video conversion by fusing optical flow analysis and scene depth learning
José L. Herrera, Carlos R. del-Blanco, N. García
DOI: https://doi.org/10.1109/3DTV.2016.7548954
Abstract: Automatic 2D-to-3D conversion aims to reduce the existing gap between the scarce 3D content and the increasing number of displays that can reproduce this 3D content. Here, we present an automatic 2D-to-3D conversion algorithm that extends most existing machine-learning-based conversion approaches to deal with moving objects in the scene, and not only with static backgrounds. Under the assumption that images with a high similarity in color likely have a similar 3D structure, the depth of a query video sequence is inferred from a color + depth training database. First, a depth estimate for the background of each image of the query video is computed adaptively by combining the depth maps of the images most similar to the query. Then, the use of optical flow enhances the depth estimation of the different moving objects in the foreground. Promising results have been obtained on a public and widely used database.
Citations: 1
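The first step the abstract describes, adaptively combining the depth maps of the most color-similar training images, is essentially a similarity-weighted k-nearest-neighbor average. A minimal sketch, where the feature vectors, weighting scheme, and function name are all assumptions for illustration:

```python
import numpy as np

def knn_depth(query_feat, train_feats, train_depths, k=3):
    """Infer a background depth map for a query image as the
    similarity-weighted average of the depth maps of its k most
    color-similar training images.

    query_feat   : (D,) color descriptor of the query image
    train_feats  : (N, D) descriptors of the training images
    train_depths : (N, H, W) corresponding depth maps
    """
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nn = np.argsort(dists)[:k]              # indices of the k nearest
    w = 1.0 / (dists[nn] + 1e-8)            # closer images weigh more
    w /= w.sum()
    # weighted sum over the k selected depth maps -> (H, W)
    return np.tensordot(w, train_depths[nn], axes=1)

# toy database: three 2x2 depth maps with simple color descriptors
train_feats = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
train_depths = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 9.0)])
depth = knn_depth(np.array([0.1, 0.1]), train_feats, train_depths, k=2)
```

In the paper's pipeline this background estimate would then be refined per moving object using optical flow.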
Illumination change robust, codec independent low bit rate coding of stereo from single-view aerial video
H. Meuel, Florian Kluger, J. Ostermann
DOI: https://doi.org/10.1109/3DTV.2016.7548961
Abstract: Low bit rate transmission of HD video captured from UAVs is highly interesting. Assuming a planar surface, areas contained in the current frame but not in the previous frames (New Area) can be reconstructed using Global Motion Compensation (GMC). Aiming at stereo reconstruction from monocular video by using motion parallax, a second view of each image pixel has to be additionally transmitted. Whereas the bit rate can be considerably reduced compared to standardized video coding, to about 1-2 Mbit/s, artifacts at the boundaries between new areas and GMC-reconstructed areas may occur, e.g. due to illumination changes. We propose a gradient correction of the new areas to adjust the luminance. Furthermore, we utilize a general ROI coding framework to remain independent of any encoder modifications. We achieve a subjectively higher video quality while saving 2% BD-rate compared to a specifically adapted encoder, by exploiting the latest encoder optimizations of x265.
Citations: 2
Perceptual oriented depth cue enhancement for stereoscopic view synthesis
Yi-Chun Chen, Tian-Sheuan Chang
DOI: https://doi.org/10.1109/3DTV.2016.7548884
Abstract: Depth cue enhancement can provide better stereo perception for viewers. However, the previous direct depth remapping approach did not consider the perceptual factors of human eyes, which could easily result in viewing discomfort. This paper presents a perceptually oriented depth cue enhancement with nonlinear disparity mapping. This mapping increases the depth resolution of the viewer-focused depth range (for better stereo perception) and, more importantly, limits its range to the stereoscopic comfort zone. The mapping strategy is modeled on a normal distribution for easy, user-specific adjustment. The experimental results show better stereoscopic viewing experiences when compared with results from the original disparity map.
Citations: 0
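A disparity remapping "modeled on a normal distribution", as the abstract describes, can be sketched with a Gaussian CDF: its slope is steepest at the center, so disparities near the focused depth get the most output resolution, while the output stays bounded within a comfort zone. The parameter names and the specific curve are assumptions for illustration, not the authors' exact model:

```python
import math

def remap_disparity(d, focus=0.0, sigma=0.2, d_min=-1.0, d_max=1.0):
    """Nonlinear disparity remapping sketch.

    A Gaussian CDF centered at the focused disparity `focus` expands
    resolution near the focus (where its slope is steepest) and maps any
    input into the comfort zone [d_min, d_max].  `sigma` controls how
    tightly the enhancement concentrates around the focus.
    """
    # standard normal CDF expressed via the error function
    cdf = 0.5 * (1.0 + math.erf((d - focus) / (sigma * math.sqrt(2.0))))
    return d_min + (d_max - d_min) * cdf
```

Adjusting `focus` and `sigma` per viewer is what makes such a model easy to tune user-specifically.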
Ghosting and popping detection for image-based rendering
S. Guthe, Pascal Schardt, M. Goesele, D. Cunningham
DOI: https://doi.org/10.1109/3DTV.2016.7548891
Abstract: Film sequences generated using image-based rendering techniques are commonly used in broadcasting, especially for sporting events. In many cases, however, image-based rendering sequences contain artifacts, and these must be manually located. Here, we propose an algorithm to automatically detect not only the presence of the two most disturbing classes of artifact (popping and ghosting), but also the strength of each instance of an artifact. A simple perceptual evaluation of the technique shows that it performs well.
Citations: 3
Method for detecting interest points in images using angular signatures
David Grogna, M. Boutaayamou, J. Verly
DOI: https://doi.org/10.1109/3DTV.2016.7548890
Abstract: We present an innovative method for detecting interest points (IPs) in grayscale and color images. It is based on the use of angular signatures (ASs), produced by spinning, at each pixel in the image, an "x-tapered, y-derivative, half-Gaussian kernel" in discrete angular steps. By exploiting the AS(s) produced at each pixel, it automatically "classifies" the pixel as being an IP or not. We present preliminary results on synthetic grayscale and real color 2D images, and these confirm the potential value of the method. It can easily be extended from grayscale and color images to images with any number of components, as well as to 3D volumetric images and images on grids of higher dimensionality. It is useful for stereo matching and video tracking.
Citations: 0
Multi-view wide baseline depth estimation robust to sparse input sampling
Lode Jorissen, Patrik Goorts, G. Lafruit, P. Bekaert
DOI: https://doi.org/10.1109/3DTV.2016.7548956
Abstract: In this paper, we propose a depth map estimation algorithm, based on Epipolar Plane Image (EPI) line extraction, that is able to correctly handle partially occluded objects in wide baseline camera setups. Furthermore, we introduce a descriptor matching technique to reduce the negative influence of inaccurate color correction and similarly textured objects on the depth maps. A visual comparison between an existing EPI-line extraction algorithm and our method is provided, showing that our method provides more accurate and consistent depth maps in most cases.
Citations: 5
Adaptive colorization-based compression for stereoscopic images
Samira Ouddane, K. Faraoun, Sid Ahmed Fezza, M. Larabi
DOI: https://doi.org/10.1109/3DTV.2016.7548962
Abstract: Considerable research efforts have been devoted to stereo image compression. However, most of them have focused on the luminance and ignored the chromatic information. Consequently, in this paper we propose a stereo color image coding method based on colorization. The main idea is to compress one view of the stereo pair using a standard coding method, while for the other view only the luminance component is considered for compression. The chromatic information of this latter view is transmitted to the decoder for a few representative pixels (RPs) only. These RPs are selected using a novel RP extraction method based on skeletonization. At the decoder side, the color values of all remaining pixels are reconstructed by colorization methods. Experimental results show that our coding method can achieve considerable bit-rate savings compared to conventional coding methods.
Citations: 2