{"title":"Deconvolving Diffraction for Fast Imaging of Sparse Scenes","authors":"Mark Sheinin, Matthew O'Toole, S. Narasimhan","doi":"10.1109/ICCP51581.2021.9466266","DOIUrl":null,"url":null,"abstract":"Most computer vision techniques rely on cameras which uniformly sample the 2D image plane. However, there exists a class of applications for which the standard uniform 2D sampling of the image plane is sub-optimal. This class consists of applications where the scene points of interest occupy the image plane sparsely (e.g., marker-based motion capture), and thus most pixels of the 2D camera sensor would be wasted. Recently, diffractive optics were used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called “Diffraction Line Imaging”, relies on the use of diffraction gratings to spread the point-spread-function (PSF) of scene points from a point to a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach for arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending the application of diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal required sensor sub-sampling for accurate scene recovery. Compared to methods that use a speckle PSF from a narrow-band source or a diffuser-based PSF with a rolling shutter sensor, our approach uses spectrally-coded PSFs from broad-band sources and allows arbitrary sensor sampling, respectively. We demonstrate that the presented combination of the imaging approach and scene recovery method is well suited for high-speed marker based motion capture and particle image velocimetry (PIV) over long periods.","PeriodicalId":132124,"journal":{"name":"2021 IEEE International Conference on Computational Photography (ICCP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCP51581.2021.9466266","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Most computer vision techniques rely on cameras that uniformly sample the 2D image plane. However, there exists a class of applications for which this standard uniform 2D sampling is sub-optimal: applications where the scene points of interest occupy the image plane sparsely (e.g., marker-based motion capture), so most pixels of a 2D camera sensor are wasted. Recently, diffractive optics have been used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called "Diffraction Line Imaging", relies on diffraction gratings to spread the point-spread function (PSF) of each scene point from a point into a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach to arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending the application of diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal sensor sub-sampling required for accurate scene recovery. Compared to methods that use a speckle PSF from a narrow-band source, our approach uses spectrally coded PSFs from broad-band sources; compared to methods that use a diffuser-based PSF with a rolling-shutter sensor, it allows arbitrary sensor sampling. We demonstrate that the presented combination of imaging approach and scene recovery method is well suited for high-speed marker-based motion capture and for particle image velocimetry (PIV) over long periods.
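To make the convolution-based formation model and the convolutional coding inverse problem concrete, here is a minimal numerical sketch. It is not the authors' implementation: the streak-shaped PSF, the every-4th-column sensor mask, and the plain ISTA solver are all simplified, hypothetical stand-ins chosen for illustration. The toy models a sparse scene convolved with a single diffraction PSF, measured on a sub-sampled sensor, and recovered by L1-regularized deconvolution.

```python
# Toy sketch (assumptions throughout): sparse scene -> convolution with a
# streak PSF -> sub-sampled sensor readout -> ISTA-based sparse recovery.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Sparse scene: a few bright points on an otherwise empty image plane.
scene = np.zeros((H, W))
idx = rng.choice(H * W, size=10, replace=False)
scene.flat[idx] = rng.uniform(0.5, 1.0, size=10)

# Toy diffraction PSF: each point spreads into a short horizontal streak.
# The asymmetric ramp is a crude grayscale stand-in for the wavelength-coded
# intensity profile of a single-axis grating's rainbow line.
psf = np.zeros((H, W))
psf[0, :9] = np.linspace(1.0, 0.2, 9)
psf /= psf.sum()

def circ_conv(x, k):
    # Circular 2D convolution via FFT (the model is shift-invariant).
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))

# Sub-sampled sensor: read out only every 4th column, standing in for a
# sparse (line-like) sensor layout; each streak still crosses 2-3 columns.
mask = np.zeros((H, W))
mask[:, ::4] = 1.0
meas = mask * (circ_conv(scene, psf) + 1e-3 * rng.standard_normal((H, W)))

def ista(y, psf, mask, lam=1e-3, step=1.0, iters=2000):
    """ISTA for min_x 0.5*||M(h*x) - y||^2 + lam*||x||_1."""
    # Adjoint of circular convolution = convolution with the flipped kernel.
    psf_adj = np.roll(np.flip(psf), (1, 1), axis=(0, 1))
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = circ_conv(mask * circ_conv(x, psf) - y, psf_adj)
        x = x - step * grad                                   # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold
    return x

recon = ista(meas, psf, mask)
hits = np.sum(recon.flat[idx] > 0.1)
print(f"recovered {hits} of {len(idx)} scene points")
```

The mask spacing illustrates the paper's sub-sampling question: if the sampled columns were spaced wider than the streak length, some points would never intersect a measurement and recovery would fail regardless of the solver, so the PSF extent bounds the admissible sensor sub-sampling.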