Compressed ultrafast photography (CUP): redefining the limit of passive ultrafast imaging (Conference Presentation)

SPIE BiOS Pub Date : 2016-06-28 DOI:10.1117/12.2211897
Liang Gao

Abstract

Video recording of ultrafast phenomena using a detector array based on CCD or CMOS technology is fundamentally limited by the sensor's on-chip storage and data-transfer speed. To get around this problem, the most practical approach is to use a streak camera. However, the resulting image is normally one-dimensional: only a line of the scene can be seen at a time. Acquiring a two-dimensional image thus requires mechanical scanning across the entire field of view, which severely restricts the applicable scenes because the event itself must be repetitive. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture two-dimensional dynamic scenes at up to 100 billion frames per second. Based on the concept of compressed sensing, CUP works by encoding the input scene with a random binary pattern in the spatial domain, then shearing the resulting image in a streak camera with a fully opened entrance slit. Image reconstruction is the solution of the inverse problem of the above processes. Given sparsity in the spatiotemporal domain, the original event datacube can be reasonably estimated by employing a two-step iterative shrinkage/thresholding (TwIST) algorithm. To demonstrate CUP, we imaged light reflection, refraction, and racing in two different media (air and resin). Our technique, for the first time, enables video recording of photon propagation at a temporal resolution down to tens of picoseconds. Moreover, to further expand CUP's functionality, we added a color separation unit to the system, thereby allowing simultaneous acquisition of a four-dimensional datacube (x, y, t, λ), where λ is wavelength, within a single camera snapshot.
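The encode-shear-integrate pipeline described in the abstract can be sketched numerically. The toy model below (illustrative only; dimensions, variable names, and the use of NumPy are assumptions, not the authors' implementation) applies a random binary mask to each temporal slice of an (x, y, t) datacube, shears each slice along one spatial axis by an offset proportional to its time index (mimicking the streak camera's temporal deflection), and sums over time to produce the single 2-D measurement from which CUP reconstructs the event:

```python
import numpy as np

# Hypothetical toy dimensions for an (x, y, t) event datacube.
NX, NY, NT = 8, 8, 4

rng = np.random.default_rng(0)
scene = rng.random((NT, NY, NX))           # I(x, y, t): the dynamic scene
mask = rng.integers(0, 2, size=(NY, NX))   # C: random binary encoding pattern

def cup_forward(scene, mask):
    """CUP forward model: spatial encoding, temporal shearing, integration."""
    nt, ny, nx = scene.shape
    # Shearing stretches the image along y by one row per time step.
    streak = np.zeros((ny + nt - 1, nx))
    for t in range(nt):
        encoded = scene[t] * mask          # spatial encoding with the mask
        streak[t:t + ny, :] += encoded     # shear by t rows, integrate over t
    return streak

measurement = cup_forward(scene, mask)     # the 2-D streak-camera image
```

Reconstruction then inverts this linear operator: given the measurement and the known mask, a sparsity-promoting solver such as TwIST estimates the full datacube from the single snapshot.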