Title: Cost-aware depth map estimation for Lytro camera
Authors: Min-Jung Kim, Tae-Hyun Oh, In-So Kweon
Published in: 2014 IEEE International Conference on Image Processing (ICIP), 2014-10-28
DOI: 10.1109/ICIP.2014.7025006 (https://doi.org/10.1109/ICIP.2014.7025006)
Citations: 18
Abstract
Since commercial light field cameras became available, they have attracted considerable interest from the computer vision and image processing communities owing to their versatile functionality. Most of their distinctive features depend on an estimated depth map, so reliable depth estimation is a crucial step. However, estimating depth from real light field cameras is challenging because of noise and the short baselines between sub-aperture images. We propose a depth map estimation method for light field cameras that exploits both correspondence and focus cues. To alleviate the effects of noise, we aggregate matching costs across all sub-aperture images into a cost volume. Owing to the efficiency of this cost volume, cost-aware depth estimation is achieved quickly via discrete-continuous optimization. In addition, we analyze the respective properties of the correspondence and focus cues and use them to select reliable anchor points. A well-reconstructed initial depth map derived from these anchors is shown to improve convergence. We demonstrate that our method outperforms state-of-the-art methods by validating it on real datasets acquired with a Lytro camera.
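The core idea of aggregating correspondence costs across sub-aperture views into a cost volume, followed by a winner-take-all depth estimate, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes grayscale views, a simple linear shift model relating depth labels to sub-aperture disparity, and absolute-difference matching costs; the function names and the shift model are hypothetical.

```python
import numpy as np

def build_cost_volume(center, subapertures, shifts, n_labels):
    """Aggregate absolute-difference matching costs between the central
    sub-aperture view and every other view, over a range of depth labels.

    center       -- (H, W) central sub-aperture image
    subapertures -- list of (H, W) off-center views
    shifts       -- per-view (dy, dx) disparity per depth label (toy linear model)
    n_labels     -- number of discrete depth hypotheses
    """
    H, W = center.shape
    cost = np.zeros((n_labels, H, W))
    for d in range(n_labels):
        for img, (dy, dx) in zip(subapertures, shifts):
            # Warp each view toward the center by the disparity implied
            # by depth label d (integer roll as a crude stand-in for
            # sub-pixel warping). Averaging over all views suppresses noise.
            shifted = np.roll(img, (-round(d * dy), -round(d * dx)), axis=(0, 1))
            cost[d] += np.abs(center - shifted)
    return cost / len(subapertures)

def wta_depth(cost):
    """Winner-take-all: pick the lowest-cost depth label at each pixel."""
    return np.argmin(cost, axis=0)
```

In the paper this discrete labeling is only the starting point; the discrete-continuous optimization and the focus-cue term then refine it, which a winner-take-all selection does not capture.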