Latest publications from the 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)

Improving depth discontinuities for depth-based 3DTV production
A. Frick, R. Koch
DOI: 10.1109/3DTV.2011.5877228
Abstract: We introduce a scheme for the production of LDV (Layered Depth Video) content, captured with a hybrid camera system consisting of two time-of-flight and five CCD cameras, and introduce a post-production step, based on GrabCut segmentation, that significantly improves the quality of the end result. The proposed improvement step can run fully automatically, initialized from estimated depth images.
Citations: 1
Bit-rate allocation for multi-view video plus depth
Emilie Bosc, V. Jantet, M. Pressigout, L. Morin, C. Guillemot
DOI: 10.1109/3DTV.2011.5877168
Abstract: Efficient compression of multi-view-video-plus-depth (MVD) data raises the question of how to allocate bit-rate between texture and depth data. This question remains open because existing studies do not rely on a shared evaluation framework. This paper studies the impact of the texture/depth bit-rate allocation on the quality of an intermediate synthesized view. The results show that, depending on the acquisition configuration, synthesized views require a different ratio between depth and texture bit-rates: between 40% and 60% of the total bit-rate should be allocated to depth.
Citations: 36
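The 40-60% finding above suggests an empirical way to tune the split: sweep candidate depth shares and keep the one that maximizes synthesized-view quality. A minimal sketch (hypothetical; `synth_quality` stands in for encoding both streams at the given rates and measuring the PSNR of an intermediate synthesized view, and the toy quality model below is for demonstration only):

```python
import math

def allocate_bitrate(total_kbps, synth_quality,
                     shares=(0.40, 0.45, 0.50, 0.55, 0.60)):
    """Pick the depth/texture split maximizing synthesized-view quality.

    synth_quality(depth_kbps, texture_kbps) -> float is a stand-in for
    encoding both streams and evaluating an intermediate synthesized view.
    Returns (depth_kbps, texture_kbps) for the best candidate share.
    """
    best = max(shares,
               key=lambda s: synth_quality(s * total_kbps, (1 - s) * total_kbps))
    return best * total_kbps, (1 - best) * total_kbps

# Toy quality model (hypothetical): quality saturates in each stream.
q = lambda depth_rate, tex_rate: math.log1p(depth_rate) + math.log1p(tex_rate)
depth_kbps, texture_kbps = allocate_bitrate(1000, q)  # symmetric model -> 50/50
```

In a real experiment `synth_quality` would be the expensive step (encode, decode, synthesize, measure), so the coarse grid over the 40-60% range reported above keeps the sweep tractable.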
Real-time vanishing point detection using the Local Dominant Orientation Signature
Jiwon Choi, Wonjun Kim, Haejung Kong, Changick Kim
DOI: 10.1109/3DTV.2011.5877194
Abstract: A vanishing point is the point at which converging perspective lines, parallel in the real world, meet in the image. We propose a novel algorithm to detect the vanishing point in various images in real time. The proposed algorithm unfolds in three steps. First, we introduce the Local Dominant Orientation Signature (LDOS) descriptor to extract the structural features of an image. Then, we detect vanishing point candidates using dynamic programming. Finally, we estimate the location of the vanishing point from the detected candidates. Unlike previous methods, the proposed method, which uses the dominant orientation of the local image structure, is fast and not limited to specific image contents. Experiments on diverse images confirm the efficiency of the proposed method and show that it can be employed in various real-time applications.
Citations: 10
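The geometric core of vanishing-point estimation is that lines parallel in 3D project to image lines meeting at one point. Intersecting image lines in homogeneous coordinates can be sketched as follows (illustrative only; the paper's LDOS descriptor and dynamic-programming candidate stage are not modeled here):

```python
def line_intersection(l1, l2):
    """Intersect two 2D lines given as (a, b, c) with a*x + b*y + c = 0.

    The intersection is the homogeneous cross product of the two line
    vectors; a zero homogeneous coordinate means the lines are parallel
    in the image (their meeting point is at infinity).
    """
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    x = b1 * c2 - b2 * c1
    y = c1 * a2 - c2 * a1
    w = a1 * b2 - a2 * b1
    if w == 0:
        return None  # parallel in the image as well
    return x / w, y / w

# Lines y = x and y = -x + 2 meet at (1, 1):
vp = line_intersection((1, -1, 0), (1, 1, -2))
```

A practical detector would intersect many such line pairs (or vote in an accumulator) and pick the most consistent point, since edge-derived lines are noisy.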
Extracting embedded data in 3D models from 2D views using perspective invariants
Yagiz Yasaroglu, A. Aydin Alatan
DOI: 10.1109/3DTV.2011.5877231
Abstract: A 3D-2D watermarking method using perspective projective invariance is proposed. Data is embedded in the relative positions of six points on a 3D mesh by translating one of them, and can be extracted from any generated 2D view as long as the points remain visible. To evaluate the performance of the perspective invariant, a watermarking system with a very simple interest-point detection method is implemented. Simulations are performed on six 3D meshes with different watermark strengths and view angles. Very promising results show that the chosen perspective invariant is suitable for 3D-2D watermarking, opening a completely new area of research.
Citations: 2
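The best-known perspective projective invariant is the cross-ratio of four collinear points, which survives any projective transformation. A one-dimensional sketch (illustrative only; the paper embeds data using an invariant of six 3D points, not this 1-D cross-ratio):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates.

    This value is unchanged by any projective map x -> (p*x + q) / (r*x + s),
    which is why such invariants can carry data across unknown camera views.
    """
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# The same four points before and after a projective map give one value:
f = lambda x: (2 * x + 1) / (x + 3)  # an arbitrary projective map
before = cross_ratio(0, 1, 3, 7)
after = cross_ratio(f(0), f(1), f(3), f(7))  # equal up to rounding
```

A watermark reader exploiting this would recover point positions in the 2D view, compute the invariant, and compare it against the embedded value, with no need to know the camera.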
DIBR-based view synthesis for free-viewpoint television
Xiaohui Yang, Ju Liu, Jiande Sun, Xinchao Li, Wei Liu, Yuling Gao
DOI: 10.1109/3DTV.2011.5877165
Abstract: We propose an effective virtual view synthesis approach based on depth-image-based rendering (DIBR). In our scheme, two reference color images and their associated depth maps are used to generate an arbitrary virtual viewpoint. First, the main and auxiliary viewpoint images are warped to the virtual viewpoint. After that, cracks and error points are removed to enhance image quality. Then, the disocclusions in the virtual viewpoint image warped from the main viewpoint are filled with the help of the auxiliary viewpoint. To reduce color discontinuity in the virtual view, the brightness of the two reference viewpoint images is adjusted. Finally, the remaining holes are filled by a depth-assisted asymmetric dilation inpainting method. Simulations show that the view synthesis approach is effective and reliable in both subjective and objective evaluations.
Citations: 30
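The warping step in the abstract above can be illustrated with a minimal 1-D forward-warping sketch (a toy, not the authors' implementation; the inverse-depth disparity model `shift_scale / depth` and the z-buffer conflict resolution are common DIBR conventions assumed here):

```python
def warp_to_virtual(color, depth, shift_scale):
    """Forward-warp a single image row to a virtual viewpoint (DIBR sketch).

    Each pixel shifts horizontally by a disparity proportional to inverse
    depth (nearer pixels move more); a z-buffer keeps the nearest surface
    when two pixels land on the same target, and positions nothing lands on
    stay None: these are the disocclusion holes later filled by inpainting.
    """
    w = len(color)
    out = [None] * w
    zbuf = [float("inf")] * w  # smaller depth value = closer to camera
    for x in range(w):
        d = depth[x]
        x2 = x + round(shift_scale / d)  # disparity ~ 1 / depth
        if 0 <= x2 < w and d < zbuf[x2]:  # keep the nearest surface
            out[x2], zbuf[x2] = color[x], d
    return out

# A near pixel ('a') shifts over the background, occluding 'c' and
# leaving a hole at its old position:
row = warp_to_virtual(["a", "b", "c", "d"], [1, 4, 4, 4], 2)
```

The `None` holes are exactly where the scheme above consults the auxiliary viewpoint and, as a last resort, inpainting.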
Enhanced block prediction in stereoscopic video coding
Dong Li, Yongbing Zhang, Qiong Liu, Xiangyang Ji, Qionghai Dai
DOI: 10.1109/3DTV.2011.5877163
Abstract: Improving the coding efficiency of stereoscopic video systems is a hot research topic in 3D video coding. To better exploit the redundancy between the two channels of stereoscopic video, a stereoscopic video coding scheme for the Audio Video Coding Standard (AVS) is proposed in this paper. The superior coding performance of the proposed scheme comes from an enhanced block prediction algorithm, which improves the AVS direct mode and motion vector prediction and adaptively combines the prediction results of motion compensation and disparity compensation. Experimental results demonstrate that the proposed scheme greatly outperforms the AVS reference software.
Citations: 1
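The choice between motion compensation (temporal reference) and disparity compensation (inter-view reference) can be sketched as a simple SAD-based mode decision (a toy illustration under assumed candidate lists; the paper's algorithm additionally combines both predictions adaptively, which this sketch omits):

```python
def sad(a, b):
    """Sum of absolute differences between two same-sized blocks."""
    return sum(abs(p - q) for p, q in zip(a, b))

def best_prediction(block, motion_candidates, disparity_candidates):
    """Pick the best predictor block across both reference types.

    motion_candidates come from the previous frame of the same view;
    disparity_candidates come from the other view of the same frame.
    Returns the winning mode name and predictor block.
    """
    best = min(motion_candidates + disparity_candidates,
               key=lambda c: sad(block, c))
    mode = "motion" if best in motion_candidates else "disparity"
    return mode, best

# The inter-view candidate matches this block more closely:
mode, pred = best_prediction([10, 10], [[9, 9]], [[10, 11]])
```

A real encoder would weigh residual cost against the bits needed to signal the mode and vector (rate-distortion optimization) rather than raw SAD alone.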
Temporal filtering for depth maps generated by Kinect depth camera
S. Matyunin, D. Vatolin, Y. Berdnikov, M. Smirnov
DOI: 10.1109/3DTV.2011.5877202
Abstract: We propose a method for filtering the depth maps produced by the Kinect depth camera. The filter uses the output of Kinect's conventional color camera along with the depth sensor to improve the temporal stability of the depth map and to fill occluded areas. To filter the input depth map, the algorithm uses information about the motion and color of objects in the video. The proposed method can be applied as a preprocessing stage before using Kinect output data.
Citations: 174
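The idea of using the color stream to guide temporal depth filtering can be sketched as follows (an illustrative simplification; the authors' filter also uses motion estimation, which this sketch replaces with a per-pixel color-difference test):

```python
from statistics import median

def temporal_filter(depth_frames, color_frames, t, radius=2, tol=10):
    """Temporally stabilize one depth frame using color as a motion proxy.

    For each pixel, take the median of depth values over a temporal window,
    but only across frames whose grayscale color at that pixel stayed within
    tol of the current frame (a cheap proxy for "no motion"); if no frame
    qualifies, keep the current depth value unchanged.
    """
    frame = depth_frames[t]
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    lo, hi = max(0, t - radius), min(len(depth_frames), t + radius + 1)
    for y in range(h):
        for x in range(w):
            samples = [depth_frames[k][y][x]
                       for k in range(lo, hi)
                       if abs(color_frames[k][y][x]
                              - color_frames[t][y][x]) <= tol]
            out[y][x] = median(samples) if samples else frame[y][x]
    return out
```

On a static pixel the median suppresses the frame-to-frame depth flicker typical of Kinect; on a moving pixel the color test shrinks the window so stale depth values do not bleed across object boundaries.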
A no-reference image quality evaluation based on power spectrum
Yan Zhang, Ping An, Qiuwen Zhang, Liquan Shen, Zhaoyang Zhang
DOI: 10.1109/3dtv.2011.5877187
Citations: 14