Disparity Estimation for Focused Light Field Camera Using Cost Aggregation in Micro-Images
Zhi-Ping Ding, Qian Liu, Qing Wang
2017 International Conference on Virtual Reality and Visualization (ICVRV), October 2017
DOI: 10.1109/ICVRV.2017.00083
Citations: 0
Abstract
Unlike conventional light field cameras that record spatial and angular information explicitly, the focused light field camera implicitly collects angular samples in micro-images behind the micro-lens array. Without directly decoded sub-apertures, it is difficult to estimate disparity for a focused light field camera. On the other hand, disparity estimation is a critical step in rendering sub-apertures from the raw image. It is hence a typical "chicken-and-egg" problem. In this paper we propose a two-stage method for disparity estimation from the raw image. In contrast to previous approaches, which assign all pixels in a micro-image the same disparity label, we introduce a segmentation-tree based cost aggregation that provides a more robust per-pixel disparity estimate, improving disparity in low-texture areas and yielding sharper occlusion boundaries. After sub-apertures are rendered from the raw image using the initial estimates, the optimal one is globally regularized using the reference sub-aperture image. Experimental results on real scene datasets demonstrate the advantages of our method over previous work, especially in low-texture areas and at occlusion boundaries.
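The abstract does not give the method's details, but the core idea of guided cost aggregation can be sketched in miniature. The toy below is an illustrative simplification, not the paper's algorithm: micro-images are reduced to 1-D intensity rows, the segmentation tree is replaced by a pixel chain whose edge weights decay with the guide-image intensity difference, and disparity is read out by winner-take-all. All names and parameters (`matching_cost`, `chain_tree_aggregate`, `sigma`) are invented for this sketch.

```python
import numpy as np

def matching_cost(left, right, max_disp):
    """Absolute-difference cost volume: cost[d, x] = |left[x] - right[x - d]|."""
    n = len(left)
    cost = np.empty((max_disp + 1, n))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d).astype(float)
        shifted[:d] = right[0]  # replicate the left border
        cost[d] = np.abs(left - shifted)
    return cost

def chain_tree_aggregate(cost, guide, sigma=10.0):
    """Two-pass cost aggregation on a 1-D chain of pixels.

    Edge weights decay with the guide intensity difference, so support
    spreads inside smooth segments and stops at strong edges -- a 1-D
    stand-in for segmentation-tree aggregation over the full image.
    """
    w = np.exp(-np.abs(np.diff(guide.astype(float))) / sigma)
    lr = cost.astype(float).copy()           # left-to-right sweep
    for x in range(1, cost.shape[1]):
        lr[:, x] += w[x - 1] * lr[:, x - 1]
    rl = cost.astype(float).copy()           # right-to-left sweep
    for x in range(cost.shape[1] - 2, -1, -1):
        rl[:, x] += w[x] * rl[:, x + 1]
    return lr + rl - cost                    # own cost was counted twice

# Toy micro-image pair: an intensity ramp seen with a true disparity of 2.
scene = np.arange(10, 110, 10, dtype=float)   # 10, 20, ..., 100
left, right = scene[:8], scene[2:]            # left[x] == right[x - 2]

cost = matching_cost(left, right, max_disp=3)
raw_disp = np.argmin(cost, axis=0)
agg_disp = np.argmin(chain_tree_aggregate(cost, guide=left), axis=0)

print(raw_disp.tolist())   # border/ambiguous pixels fail: [0, 1, 2, 2, 2, 2, 2, 2]
print(agg_disp.tolist())   # aggregation recovers them:    [2, 2, 2, 2, 2, 2, 2, 2]
```

On the low-texture ramp, per-pixel winner-take-all is ambiguous near the border, while the aggregated costs recover the correct disparity everywhere, which mirrors the paper's motivation for per-pixel aggregation over per-micro-image labels.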