Depth map generation for 2D-to-3D conversion by limited user inputs and depth propagation

Xi Yan, You Yang, Guihua Er, Qionghai Dai
DOI: 10.1109/3DTV.2011.5877167
Published in: 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2011-05-16
Citations: 26

Abstract

The quality of depth maps is crucial for 2D-to-3D conversion, but high-quality depth map generation methods are usually very time-consuming. We propose an efficient semi-automatic depth map generation scheme based on limited user inputs and depth propagation. First, the original image is over-segmented. Then, the depth values of selected pixels and the approximate locations of T-junctions are specified by user inputs. The final depth map is obtained by depth propagation combining the user inputs with color and edge information. The experimental results demonstrate that our scheme is satisfactory in terms of both accuracy and efficiency, and thus can be applied to high-quality 2D-to-3D video conversion.