Single-mask sphere-packing with implicit neural representation reconstruction for ultrahigh-speed imaging
Nelson Diaz, Madhu Beniwal, Miguel Marquez, Felipe Guzman, Cheng Jiang, Jinyang Liang, Esteban Vera
Optics Express, vol. 33, no. 11, pp. 24027-24038, published 2025-06-02. DOI: 10.1364/OE.561323
Citations: 0
Abstract
Single-shot, high-speed 2D optical imaging is essential for studying transient phenomena in various research fields. Among existing techniques, compressed optical-streaking ultra-high-speed photography (COSUP) uses a coded aperture and a galvanometer scanner to capture non-repeatable, time-evolving events at the 1.5-million-frame-per-second level. However, the use of a randomly coded aperture complicates the reconstruction process and introduces artifacts into the recovered videos. In contrast, non-multiplexing coded apertures simplify the reconstruction algorithm, allowing longer videos to be recovered from a single snapshot. In this work, we design a non-multiplexing coded aperture for COSUP by exploiting the properties of congruent sphere packing (SP), which enables uniform space-time sampling through the synergy between the galvanometer's linear scanning and the optimal SP encoding patterns. We also develop an implicit neural representation, which can be self-trained from a single measurement, to largely reduce the training time, eliminate the need for training datasets, and reconstruct far more ultra-high-speed frames from a single measurement. The advantages of the proposed encoding and reconstruction scheme are verified by simulations and experimental results in a COSUP system.
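As a rough illustration of the two ideas in the abstract, the sketch below pairs a toy COSUP-style optical-streaking forward model (coded frames shifted by a linear scan and summed into one snapshot) with a coordinate-MLP implicit neural representation self-trained against that single measurement. This is not the authors' implementation: the array sizes, network architecture, training loop, and the random mask standing in for the sphere-packing pattern are all illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch environment. A random binary mask stands in
# for the sphere-packing (SP) coded aperture described in the paper.
import torch
import torch.nn as nn

H, W, T = 32, 32, 16          # assumed video size: T frames of H x W pixels

def forward_model(video, mask):
    """Streaking measurement: each coded frame is shifted one pixel per time
    step (mimicking the galvanometer linear scan) and the shifts are summed."""
    y = torch.zeros(H, W + T - 1, device=video.device)
    for t in range(T):
        y[:, t:t + W] += mask * video[t]
    return y

class INR(nn.Module):
    """Small MLP mapping Fourier-encoded (x, y, t) coordinates to intensity."""
    def __init__(self, n_freq=8, width=128):
        super().__init__()
        self.register_buffer("freq", 2.0 ** torch.arange(n_freq) * torch.pi)
        self.net = nn.Sequential(
            nn.Linear(3 * 2 * n_freq, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1), nn.Sigmoid())

    def forward(self, coords):                       # coords: (N, 3) in [0, 1]
        arg = coords[..., None] * self.freq          # (N, 3, n_freq)
        enc = torch.cat([arg.sin(), arg.cos()], -1).flatten(-2)
        return self.net(enc).squeeze(-1)

# Coordinate grid covering the whole (t, y, x) video volume.
t, yy, xx = torch.meshgrid(torch.linspace(0, 1, T), torch.linspace(0, 1, H),
                           torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xx, yy, t], -1).reshape(-1, 3)

mask = (torch.rand(H, W) > 0.5).float()             # stand-in for the SP mask
gt = torch.rand(T, H, W)                            # stand-in ground-truth scene
measurement = forward_model(gt, mask)               # the single snapshot

# Self-supervised fitting: render the video from the INR and require that the
# forward model applied to it reproduces the single measurement.
model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    video = model(coords).reshape(T, H, W)
    loss = ((forward_model(video, mask) - measurement) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the network is queried on a continuous coordinate grid, the fitted representation can in principle be sampled at more time points than the encoder explicitly multiplexed, which is the mechanism behind recovering "far more ultra-high-speed frames" from one snapshot.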
Journal introduction:
Optics Express is the all-electronic, open-access journal for optics, providing rapid publication of peer-reviewed articles that emphasize scientific and technological innovations in all aspects of optics and photonics.