J. G. Marichal-Hernández, J. P. Luke, F. Rosa, F. Pérez Nava, J. Rodríguez-Ramos
{"title":"Fast approximate focal stack transform","authors":"J. G. Marichal-Hernández, J. P. Luke, F. Rosa, F. Pérez Nava, J. Rodríguez-Ramos","doi":"10.1109/3DTV.2009.5069644","DOIUrl":null,"url":null,"abstract":"In this work we develop a new algorithm, extending the Fast Digital Radon transform from Götz and Druckmüller (1996), that is capable of generating the approximate focal stack of a scene, previously measured with a plenoptic camera, with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N4) to achieve a volume consisting of 2N − 1 photographic planes focused at different depths, from a N4 light field. The method is close to real-time performance, and its output can be used to estimate the distances to objects of a scene.","PeriodicalId":230128,"journal":{"name":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DTV.2009.5069644","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
In this work we develop a new algorithm, extending the Fast Digital Radon transform from Götz and Druckmüller (1996), that is capable of generating the approximate focal stack of a scene, previously measured with a plenoptic camera, with the minimum number of operations. This new algorithm does not require multiplications, just sums, and its computational complexity is O(N⁴) to achieve a volume consisting of 2N − 1 photographic planes focused at different depths, from an N⁴ light field. The method is close to real-time performance, and its output can be used to estimate the distances to objects of a scene.
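For orientation, the sketch below shows the brute-force shift-and-add refocusing that a focal stack computation conceptually performs on a 4D light field L[u, v, s, t]. It is not the paper's fast algorithm: the array layout, the integer slope parameterisation, and the helper name brute_force_focal_stack are assumptions for illustration only, and the recursive, multiplication-free scheme derived from the fast digital Radon transform is not reproduced here.

    import numpy as np

    def brute_force_focal_stack(L, slopes):
        """L: 4D light field of shape (U, V, S, T).
        slopes: iterable of integer shifts per unit of (u, v), one per focal plane.
        Returns an array of refocused images, one per slope."""
        U, V, S, T = L.shape
        stack = []
        for k in slopes:
            acc = np.zeros((S, T), dtype=np.float64)
            for u in range(U):
                for v in range(V):
                    # Shift each sub-aperture image proportionally to its (u, v)
                    # offset, then accumulate: summing over the aperture synthesises
                    # a photograph focused at the depth associated with slope k.
                    du = k * (u - U // 2)
                    dv = k * (v - V // 2)
                    acc += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
            stack.append(acc / (U * V))
        return np.stack(stack)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        L = rng.random((8, 8, 64, 64))                      # toy light field
        stack = brute_force_focal_stack(L, slopes=range(-3, 4))
        print(stack.shape)                                  # (7, 64, 64): one image per slope

In this naive form each refocused plane costs on the order of N⁴ additions, so a full stack of about 2N − 1 planes costs roughly N⁵ operations; the abstract's point is that reusing partial sums, in the manner of the fast digital Radon transform, brings the entire stack down to O(N⁴) additions with no multiplications.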