{"title":"Depth map estimation from a single video sequence","authors":"Tien-Ying Kuo, Cheng-Hong Hsieh, Yi-Chung Lo","doi":"10.1109/ISCE.2013.6570130","DOIUrl":null,"url":null,"abstract":"The goal of this paper is to develop a robust depth estimation method from a single-view video sequence. We utilize an estimated initial depth to establish a reference depth for further obtaining the reliable depth information, and then it is refined with a temporal-spatial filter. At first, we use adaptive support-weight block matching to extract disparity information from consecutive video frames. The disparity is compensated with the camera motion and then transformed to the initially estimated depth. Based on the initial depth, two kinds of depth maps, the propagation depth and the optical flow depth can be established. Finally, these three depth maps are fused together by using voting merger, and then applied with the superpixel segmentation and a temporal-spatial smoothing filter to improve the noisy depth estimation in the textureless region. The experiments show that the proposed method could achieve visually pleasing and temporally consistent depth estimation results without additional pre-processing and time-consuming iterations as required in other works.","PeriodicalId":442380,"journal":{"name":"2013 IEEE International Symposium on Consumer Electronics (ISCE)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Symposium on Consumer Electronics (ISCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCE.2013.6570130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
The goal of this paper is to develop a robust depth estimation method for a single-view video sequence. An initial depth estimate serves as a reference from which reliable depth information is obtained, and the result is then refined with a temporal-spatial filter. First, adaptive support-weight block matching extracts disparity information from consecutive video frames. The disparity is compensated for camera motion and then converted into the initial depth estimate. From this initial depth, two further depth maps, the propagation depth and the optical flow depth, are established. Finally, these three depth maps are fused by a voting merger, and superpixel segmentation together with a temporal-spatial smoothing filter is applied to improve the noisy depth estimates in textureless regions. Experiments show that the proposed method achieves visually pleasing and temporally consistent depth estimation without the additional pre-processing and time-consuming iterations required in other works.
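The abstract names a voting merger as the step that fuses the initial, propagation, and optical-flow depth maps, but it does not spell out the mechanics. Below is a minimal NumPy sketch of one plausible per-pixel voting scheme; the agreement tolerance `tol` and the median fallback are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def voting_merge(depths, tol=0.05):
    """Fuse candidate depth maps by per-pixel voting.

    depths : sequence of three (H, W) float arrays, e.g. the initial,
             propagation, and optical-flow depth maps.
    tol    : relative tolerance for two estimates to count as agreeing
             (hypothetical parameter, not from the paper).
    """
    d = np.stack(depths, axis=0)                        # (3, H, W)
    # Pairwise agreement: |d_i - d_j| <= tol * max(d_i, d_j)
    diff = np.abs(d[:, None] - d[None, :])
    agree = diff <= tol * np.maximum(d[:, None], d[None, :])
    votes = agree.sum(axis=1) - 1                       # exclude self-agreement
    best = votes.argmax(axis=0)                         # most-supported estimate per pixel
    fused = np.take_along_axis(d, best[None], axis=0)[0]
    # Pixels where no estimate agrees with any other fall back to the median.
    no_consensus = votes.max(axis=0) == 0
    fused[no_consensus] = np.median(d, axis=0)[no_consensus]
    return fused
```

In the paper the fused map is subsequently cleaned up with superpixel segmentation and a temporal-spatial smoothing filter, so a sketch like this would only produce the raw merged depth before that refinement.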