Title: Depth map generation for 2D-to-3D conversion by limited user inputs and depth propagation
Authors: Xi Yan, You Yang, Guihua Er, Qionghai Dai
Venue: 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)
Published: 2011-05-16
DOI: 10.1109/3DTV.2011.5877167 (https://doi.org/10.1109/3DTV.2011.5877167)
Citations: 26
Abstract
The quality of depth maps is crucial for 2D-to-3D conversion, but high-quality depth map generation methods are usually very time-consuming. We propose an efficient semi-automatic depth map generation scheme based on limited user inputs and depth propagation. First, the original image is over-segmented. Then, the depth values of selected pixels and the approximate locations of T-junctions are specified by user inputs. The final depth map is obtained by depth propagation that combines the user inputs with color and edge information. The experimental results demonstrate that our scheme is satisfactory in terms of both accuracy and efficiency, and thus can be applied to high-quality 2D-to-3D video conversion.
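The propagation step described above can be illustrated with a minimal sketch. The paper's actual method combines user inputs, color, and edge information over an over-segmented image; the code below is not that method but a simplified, pixel-level stand-in: user-specified depths are held fixed while unknown pixels iteratively average their 4-neighbors, weighted by color similarity so that depth spreads within similarly colored regions but not across strong color edges. All function and parameter names (`propagate_depth`, `sigma`, `n_iters`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def propagate_depth(image, sparse_depth, known_mask, n_iters=200, sigma=0.1):
    """Propagate sparse user-specified depths to all pixels.

    image        : (H, W, 3) float array, colors in [0, 1]
    sparse_depth : (H, W) float array, depth at user-marked pixels, 0 elsewhere
    known_mask   : (H, W) bool array, True where the user specified a depth

    A simplified color-weighted diffusion: each unknown pixel repeatedly
    takes a weighted average of its 4-neighbors, where the weight decays
    with color difference (so depth rarely leaks across color edges).
    """
    depth = sparse_depth.astype(float).copy()
    h, w = depth.shape
    for _ in range(n_iters):
        new = depth.copy()
        for y in range(h):
            for x in range(w):
                if known_mask[y, x]:
                    continue  # user-specified depths stay fixed
                wsum, dsum = 0.0, 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        diff = image[y, x] - image[ny, nx]
                        # similar colors -> weight near 1; across an edge -> near 0
                        wgt = np.exp(-float(np.dot(diff, diff)) / (2.0 * sigma**2))
                        wsum += wgt
                        dsum += wgt * depth[ny, nx]
                if wsum > 0:
                    new[y, x] = dsum / wsum
        depth = new
    return depth

# Toy usage: a 4x4 image with a black left half and white right half.
# One "near" scribble (depth 1.0) on the left, one "far" scribble (0.0)
# on the right; each depth fills its own color region.
img = np.zeros((4, 4, 3))
img[:, 2:, :] = 1.0
sd = np.zeros((4, 4))
sd[0, 0] = 1.0
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
mask[0, 3] = True
dense = propagate_depth(img, sd, mask)
```

In practice this per-pixel Jacobi loop is far too slow for video; operating on superpixels from an over-segmentation (as the paper does) or solving the equivalent sparse linear system directly would be the efficient route.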