Depth map estimation in light fields using an stereo-like taxonomy

F. Calderon, C. Parra, Cesar L. Nino

2014 XIX Symposium on Image, Signal Processing and Artificial Vision, September 2014.
DOI: 10.1109/STSIVA.2014.7010131
Citations: 8

Abstract

The light field (LF) is a function that describes the amount of light traveling in every direction (angular) through every point (spatial) in a scene. The LF can be captured in several ways: using arrays of cameras or, more recently, using a single camera with a special lens that captures both the angular and spatial information of the light rays in a scene. This recent camera design offers a different approach to recovering the depth of a scene with only a single camera. To estimate depth, we describe a taxonomy similar to the one used in stereo depth-map algorithms: first, a cost tensor is built to represent the matching cost at different disparities; then the cost tensor is aggregated using a support-weight window; finally, a winner-takes-all optimization selects the best disparity at each pixel. This paper explains in detail the changes needed to apply a stereo-like taxonomy to a light field, and evaluates the algorithm on a recent database that, for the first time, provides several ground-truth light fields together with their respective ground-truth depth maps.
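The three-stage pipeline in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the absolute-difference matching cost, the uniform (box) support window, the function names, and the assumption that a pixel at disparity d shifts by d times the angular offset between sub-aperture views are all simplifying choices made here.

```python
import numpy as np

def matching_cost(center, view, shift):
    # Per-pixel absolute difference after shifting a neighboring
    # sub-aperture view by the disparity hypothesis (illustrative cost).
    return np.abs(center - np.roll(view, shift, axis=1))

def box_filter(img, win):
    # Uniform support-weight aggregation over a win x win window,
    # implemented with edge padding and shifted sums.
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out / win ** 2

def depth_map(center, views, disparities, win=5):
    # views: list of (image, angular_offset) pairs; a point at disparity d
    # is assumed to shift by d * angular_offset between views.
    cost = np.zeros((len(disparities),) + center.shape)
    for i, d in enumerate(disparities):
        # 1) cost tensor: average matching cost over all neighboring views
        cost[i] = np.mean(
            [matching_cost(center, v, d * s) for v, s in views], axis=0)
        # 2) aggregate each disparity slice with the support window
        cost[i] = box_filter(cost[i], win)
    # 3) winner-takes-all: minimal aggregated cost per pixel
    return np.asarray(disparities)[np.argmin(cost, axis=0)]
```

On a synthetic example where one neighboring view is the center image shifted by a constant disparity, the interior of the recovered map matches that disparity; real light-field data would of course need a more robust cost and an adaptive support weight.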