Single-mask sphere-packing with implicit neural representation reconstruction for ultrahigh-speed imaging

IF 3.2 | CAS Zone 2, Physics and Astrophysics | JCR Q2, OPTICS
Optics Express | Pub Date: 2025-06-02 | DOI: 10.1364/OE.561323
Nelson Diaz, Madhu Beniwal, Miguel Marquez, Felipe Guzman, Cheng Jiang, Jinyang Liang, Esteban Vera
Optics Express 33(11), 24027-24038 (2025). Journal Article.
Citations: 0

Abstract


Single-shot, high-speed 2D optical imaging is essential for studying transient phenomena across many research fields. Among existing techniques, compressed optical-streaking ultra-high-speed photography (COSUP) uses a coded aperture and a galvanometer scanner to capture non-repeatable, time-evolving events at the 1.5-million-frame-per-second level. However, a randomly coded aperture complicates the reconstruction process and introduces artifacts into the recovered videos. In contrast, non-multiplexing coded apertures simplify the reconstruction algorithm, allowing longer videos to be recovered from a single snapshot. In this work, we design a non-multiplexing coded aperture for COSUP by exploiting the properties of congruent sphere packing (SP); the synergy between the galvanometer's linear scanning and the optimal SP encoding patterns enables uniform space-time sampling. We also develop an implicit neural representation, which can be self-trained from a single measurement, to largely reduce the training time, eliminate the need for training datasets, and reconstruct far more ultra-high-speed frames from a single measurement. The advantages of the proposed encoding and reconstruction scheme are verified by simulations and experimental results on a COSUP system.
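As a rough illustration of the acquisition model described above (not the authors' code), a COSUP-style measurement can be sketched as follows: each frame of the scene is coded by a static aperture mask, the galvanometer's linear scan shifts successive frames across the sensor, and the sensor integrates everything into one snapshot. The shapes, scan direction, and one-pixel-per-frame shift are illustrative assumptions.

```python
import numpy as np

def cosup_forward(video, mask, shift_per_frame=1):
    """Toy COSUP measurement: each frame is coded by the static mask,
    sheared horizontally by the linear galvanometer scan, and summed
    into a single snapshot. Shapes and scan model are illustrative."""
    T, H, W = video.shape
    snapshot = np.zeros((H, W + shift_per_frame * (T - 1)))
    for t in range(T):
        coded = video[t] * mask          # aperture coding
        s = t * shift_per_frame          # linear optical streaking
        snapshot[:, s:s + W] += coded    # sensor integrates over time
    return snapshot

rng = np.random.default_rng(0)
video = rng.random((8, 16, 16))          # 8 frames of a 16x16 scene
mask = (rng.random((16, 16)) > 0.5).astype(float)
y = cosup_forward(video, mask)
print(y.shape)                           # (16, 23): 16 x (16 + 7)
```

Because the streaking maps time to a spatial axis, recovering the video amounts to inverting this shifted-sum operator, which is where the aperture design and the self-trained reconstruction come in.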

Source journal
Optics Express (Physics - Optics)
CiteScore: 6.60
Self-citation rate: 15.80%
Annual articles: 5182
Review time: 2.1 months
Journal introduction: Optics Express is the all-electronic, open access journal for optics, providing rapid publication of peer-reviewed articles that emphasize scientific and technology innovations in all aspects of optics and photonics.