Deconvolving Diffraction for Fast Imaging of Sparse Scenes

Mark Sheinin, Matthew O'Toole, S. Narasimhan
{"title":"Deconvolving Diffraction for Fast Imaging of Sparse Scenes","authors":"Mark Sheinin, Matthew O'Toole, S. Narasimhan","doi":"10.1109/ICCP51581.2021.9466266","DOIUrl":null,"url":null,"abstract":"Most computer vision techniques rely on cameras which uniformly sample the 2D image plane. However, there exists a class of applications for which the standard uniform 2D sampling of the image plane is sub-optimal. This class consists of applications where the scene points of interest occupy the image plane sparsely (e.g., marker-based motion capture), and thus most pixels of the 2D camera sensor would be wasted. Recently, diffractive optics were used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called “Diffraction Line Imaging”, relies on the use of diffraction gratings to spread the point-spread-function (PSF) of scene points from a point to a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach for arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending the application of diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal required sensor sub-sampling for accurate scene recovery. Compared to methods that use a speckle PSF from a narrow-band source or a diffuser-based PSF with a rolling shutter sensor, our approach uses spectrally-coded PSFs from broad-band sources and allows arbitrary sensor sampling, respectively. We demonstrate that the presented combination of the imaging approach and scene recovery method is well suited for high-speed marker based motion capture and particle image velocimetry (PIV) over long periods.","PeriodicalId":132124,"journal":{"name":"2021 IEEE International Conference on Computational Photography (ICCP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCP51581.2021.9466266","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Most computer vision techniques rely on cameras that uniformly sample the 2D image plane. However, there exists a class of applications for which this standard uniform 2D sampling is sub-optimal: applications where the scene points of interest occupy the image plane sparsely (e.g., marker-based motion capture), so that most pixels of a 2D camera sensor are wasted. Recently, diffractive optics were used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called “Diffraction Line Imaging”, relies on diffraction gratings to spread the point-spread function (PSF) of each scene point from a point into a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach to arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal sensor sub-sampling required for accurate scene recovery. Compared to methods that use a speckle PSF from a narrow-band source, or a diffuser-based PSF with a rolling-shutter sensor, our approach uses spectrally-coded PSFs from broad-band sources and allows arbitrary sensor sampling. We demonstrate that the combination of this imaging approach and scene recovery method is well suited for high-speed marker-based motion capture and particle image velocimetry (PIV) over long periods.
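To make the convolution-based formation model concrete, the sketch below simulates measurements of the form y = M(h * x) + n, where x is the sparse scene, h the diffraction PSF, and M a binary sensor sub-sampling mask, and recovers x with plain ISTA on the l1-regularized convolutional coding problem. This is a minimal illustration under stated assumptions, not the authors' implementation: a grayscale diagonal-line PSF stands in for the spectrally-coded grating PSF, an every-4th-row mask stands in for the sparse sensor sampling, and all function names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(x, psf, mask):
    """Convolution-based image formation (sketch): blur the sparse scene x
    with the diffraction PSF, keep only sensor samples chosen by the mask."""
    return mask * fftconvolve(x, psf, mode="same")

def ista_recover(y, psf, mask, lam=1e-4, iters=300):
    """Recover a sparse scene from sub-sampled diffraction measurements via
    ISTA on  min_x 0.5*||M(h*x) - y||^2 + lam*||x||_1."""
    psf_flip = psf[::-1, ::-1]                        # adjoint of convolution
    step = 1.0 / max(np.sum(np.abs(psf)) ** 2, 1e-12)  # 1 / Lipschitz bound
    x = np.zeros_like(y)
    for _ in range(iters):
        resid = forward(x, psf, mask) - y
        grad = fftconvolve(mask * resid, psf_flip, mode="same")
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Sparse scene: a few bright points (e.g., motion-capture markers).
    x_true = np.zeros((128, 128))
    for r, c in rng.integers(16, 112, size=(10, 2)):
        x_true[r, c] = 1.0
    # Toy diagonal-line PSF (illustrative stand-in for a grating's spread
    # PSF); its vertical extent lets every point's streak cross sampled rows.
    psf = np.eye(31) / 31.0
    # Sub-sample the sensor plane: keep every 4th row (sparse line sensors).
    mask = np.zeros_like(x_true)
    mask[::4, :] = 1.0
    y = forward(x_true, psf, mask) + 1e-3 * mask * rng.standard_normal(x_true.shape)
    x_hat = ista_recover(y, psf, mask)
    hits = np.sum((x_hat > 0.5 * x_hat.max()) & (x_true > 0))
    print(f"recovered {hits} of {int(x_true.sum())} scene points")
```

Expressing the measurement as a mask applied to a convolution is what lets a single 2D model cover line sensors, row sub-sampling, and the other arbitrary sensor samplings the abstract mentions; the paper's convolutional coding formulation plays the role of the ISTA step here, with the minimal sub-sampling rate analyzed for the single-axis grating case.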