{"title":"压缩超快摄影(CUP):重新定义被动超快成像的极限(会议报告)","authors":"Liang Gao","doi":"10.1117/12.2211897","DOIUrl":null,"url":null,"abstract":"Video recording of ultrafast phenomena using a detector array based on the CCD or CMOS technologies is fundamentally limited by the sensor’s on-chip storage and data transfer speed. To get around this problem, the most practical approach is to utilize a streak camera. However, the resultant image is normally one dimensional—only a line of the scene can be seen at a time. Acquiring a two-dimensional image thus requires mechanical scanning across the entire field of view. This requirement poses severe restrictions on the applicable scenes because the event itself must be repetitive. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture two-dimensional dynamic scenes at up to 100 billion frames per second. Based on the concept of compressed sensing, CUP works by encoding the input scene with a random binary pattern in the spatial domain, followed by shearing the resultant image in a streak camera with a fully-opened entrance slit. The image reconstruction is the solution of the inverse problem of above processes. Given sparsity in the spatiotemporal domain, the original event datacube can be reasonably estimated by employing a two-step iterative shrinkage/thresholding algorithm. To demonstrate CUP, we imaged light reflection, refraction, and racing in two different media (air and resin). Our technique, for the first time, enables video recording of photon propagation at a temporal resolution down to tens of picoseconds. Moreover, to further expand CUP’s functionality, we added a color separation unit to the system, thereby allowing simultaneous acquisition of a four-dimensional datacube (x,y,t,λ), where λ is wavelength, within a single camera snapshot.","PeriodicalId":227483,"journal":{"name":"SPIE BiOS","volume":"101 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Compressed ultrafast photography (CUP): redefining the limit of passive ultrafast imaging (Conference Presentation)\",\"authors\":\"Liang Gao\",\"doi\":\"10.1117/12.2211897\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Video recording of ultrafast phenomena using a detector array based on the CCD or CMOS technologies is fundamentally limited by the sensor’s on-chip storage and data transfer speed. To get around this problem, the most practical approach is to utilize a streak camera. However, the resultant image is normally one dimensional—only a line of the scene can be seen at a time. Acquiring a two-dimensional image thus requires mechanical scanning across the entire field of view. This requirement poses severe restrictions on the applicable scenes because the event itself must be repetitive. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture two-dimensional dynamic scenes at up to 100 billion frames per second. Based on the concept of compressed sensing, CUP works by encoding the input scene with a random binary pattern in the spatial domain, followed by shearing the resultant image in a streak camera with a fully-opened entrance slit. 
The image reconstruction is the solution of the inverse problem of above processes. Given sparsity in the spatiotemporal domain, the original event datacube can be reasonably estimated by employing a two-step iterative shrinkage/thresholding algorithm. To demonstrate CUP, we imaged light reflection, refraction, and racing in two different media (air and resin). Our technique, for the first time, enables video recording of photon propagation at a temporal resolution down to tens of picoseconds. Moreover, to further expand CUP’s functionality, we added a color separation unit to the system, thereby allowing simultaneous acquisition of a four-dimensional datacube (x,y,t,λ), where λ is wavelength, within a single camera snapshot.\",\"PeriodicalId\":227483,\"journal\":{\"name\":\"SPIE BiOS\",\"volume\":\"101 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SPIE BiOS\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2211897\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SPIE BiOS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2211897","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Video recording of ultrafast phenomena with a detector array based on CCD or CMOS technology is fundamentally limited by the sensor's on-chip storage and data-transfer speed. The most practical way around this problem is to use a streak camera. However, the resulting image is normally one-dimensional: only a single line of the scene can be seen at a time. Acquiring a two-dimensional image therefore requires mechanical scanning across the entire field of view, which severely restricts the applicable scenes because the event itself must be repetitive. To overcome these limitations, we have developed a new computational ultrafast imaging method, referred to as compressed ultrafast photography (CUP), which can capture two-dimensional dynamic scenes at up to 100 billion frames per second. Based on the concept of compressed sensing, CUP encodes the input scene with a random binary pattern in the spatial domain and then shears the encoded image in a streak camera operated with a fully opened entrance slit. Image reconstruction amounts to solving the inverse problem of these processes: given sparsity in the spatiotemporal domain, the original event datacube can be reasonably estimated with a two-step iterative shrinkage/thresholding (TwIST) algorithm. To demonstrate CUP, we imaged light reflection, refraction, and the racing of light pulses in two different media (air and resin). Our technique enables, for the first time, video recording of photon propagation at a temporal resolution down to tens of picoseconds. Moreover, to further expand CUP's functionality, we added a color-separation unit to the system, allowing simultaneous acquisition of a four-dimensional datacube (x, y, t, λ), where λ is wavelength, within a single camera snapshot.
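
The acquisition chain summarized above (spatial encoding with a random binary mask, temporal shearing in the streak camera, and integration onto a 2-D detector) can be illustrated with a short forward-model sketch. The snippet below is a minimal illustration, not the authors' implementation: the array sizes, the shear of one pixel row per frame, and the names cup_forward, scene, and mask are assumptions made for clarity. Reconstruction would then invert this operator under a spatiotemporal sparsity prior, for example with a TwIST-type solver.

import numpy as np

def cup_forward(scene, mask):
    # scene: event datacube of shape (nt, ny, nx); mask: binary pattern of shape (ny, nx).
    nt, ny, nx = scene.shape
    # Spatial encoding: every temporal frame is multiplied by the same random binary mask.
    encoded = scene * mask
    # Temporal shearing: frame t is shifted by t pixel rows along y (assumed shear rate),
    # emulating the streak camera sweep with a fully opened entrance slit,
    # then all frames are integrated onto a single 2-D streak image.
    streak = np.zeros((ny + nt - 1, nx))
    for t in range(nt):
        streak[t:t + ny, :] += encoded[t]
    return streak

# Toy example (illustrative sizes): a 32-frame, 64x64 event datacube.
rng = np.random.default_rng(0)
scene = rng.random((32, 64, 64))
mask = (rng.random((64, 64)) > 0.5).astype(float)
measurement = cup_forward(scene, mask)
print(measurement.shape)  # (95, 64): one 2-D snapshot encodes the whole datacube

Because the single measurement has far fewer pixels than the (x, y, t) datacube it encodes, recovering the scene relies on the compressed-sensing assumption that the event is sparse in the spatiotemporal domain.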