Automatic 2D to 3D video and image conversion based on global depth map

Shelmy Mathai, Paul P. Mathai, K. Divya
{"title":"Automatic 2D to 3D video and image conversion based on global depth map","authors":"Shelmy Mathai, Paul P. Mathai, K. Divya","doi":"10.1109/ICCIC.2015.7435781","DOIUrl":null,"url":null,"abstract":"3d technology brings a new era of entertainment to the human race. It offered a wide array of possibilities in near future in almost every walk of life and in entertainment segment. 3D content generation is the important step in 3D systems. Special cameras such as stereoscopic dual camera, depth range camera etc. are designed to generate the 3D model of a scene directly. There are different techniques to generate the 3D content. But the problem is our current and past media data are in 2D which needs to convert into 3D. This is where the importance of 2D to 3D transformation arises. In this paper proposed real time 3D image and video creation by depth map estimation. Depth map estimation can be done in two methods. One is based on depth fusion method and other is based on saliency map of an image. In dataset image estimate the depth map from depth fusion method and then depth is refined by color spatial variance. In non dataset images we find depth map by global saliency method. Experimental result demonstrates that the proposed technique convey better performance compared to the state-of-the-art of methods.","PeriodicalId":276894,"journal":{"name":"2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIC.2015.7435781","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

3D technology has ushered in a new era of entertainment, opening a wide array of possibilities in nearly every walk of life, and particularly in the entertainment segment. 3D content generation is a key step in 3D systems. Special cameras, such as stereoscopic dual cameras and depth-range cameras, are designed to capture the 3D model of a scene directly, and several other techniques exist for generating 3D content. The problem, however, is that most current and legacy media exist in 2D and need to be converted to 3D; this is where 2D-to-3D conversion becomes important. This paper proposes real-time 3D image and video creation based on depth map estimation. The depth map is estimated by one of two methods: one based on depth fusion and the other based on the saliency map of an image. For dataset images, the depth map is estimated by the depth-fusion method and then refined using color spatial variance. For non-dataset images, the depth map is obtained by a global saliency method. Experimental results demonstrate that the proposed technique delivers better performance than state-of-the-art methods.
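The abstract's non-dataset branch estimates depth from a global saliency map but does not name the exact saliency algorithm. The sketch below is a minimal illustration, assuming the well-known spectral residual method (Hou & Zhang, 2007) as a stand-in; the function names and the "salient means near" mapping are illustrative assumptions, not the authors' stated method.

```python
import numpy as np
import cv2

def spectral_residual_saliency(gray: np.ndarray, size: int = 64) -> np.ndarray:
    """Global saliency via the spectral residual method: subtract the locally
    averaged log-amplitude spectrum from the original, then invert the FFT."""
    small = cv2.resize(gray, (size, size)).astype(np.float64)
    spectrum = np.fft.fft2(small)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # The spectral residual is the log amplitude minus its 3x3 local average.
    residual = log_amplitude - cv2.blur(log_amplitude, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

def saliency_to_depth(image_bgr: np.ndarray) -> np.ndarray:
    """Treat the normalized saliency map as a proxy depth map, assuming
    salient (foreground) pixels lie closer to the camera (values near 1)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return spectral_residual_saliency(gray)
```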
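Once a depth map is available, a stereo pair for a 3D display is typically synthesized by depth-image-based rendering (DIBR). The paper does not detail its rendering stage, so the following is a hedged sketch of the standard approach: shift each pixel horizontally in proportion to its depth and fill disoccluded holes naively. `max_disparity` and the hole-filling rule are illustrative assumptions.

```python
import numpy as np

def render_stereo_pair(image: np.ndarray, depth: np.ndarray,
                       max_disparity: int = 24) -> tuple[np.ndarray, np.ndarray]:
    """Minimal DIBR sketch: `image` is an H x W x 3 BGR frame, `depth` is an
    H x W map in [0, 1] (1 = near). Each pixel is shifted horizontally by a
    disparity proportional to its depth to synthesize left/right views."""
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(np.int32)
    left, right = np.zeros_like(image), np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Opposite half-disparity shifts produce the two eye views.
        xl = np.clip(cols + disparity[y] // 2, 0, w - 1)
        xr = np.clip(cols - disparity[y] // 2, 0, w - 1)
        left[y, xl] = image[y]
        right[y, xr] = image[y]
    # Naive hole filling: propagate the previous column into disoccluded
    # (all-zero) pixels; real systems use more careful inpainting.
    for view in (left, right):
        for x in range(1, w):
            mask = (view[:, x] == 0).all(axis=-1)
            view[mask, x] = view[mask, x - 1]
    return left, right
```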