Snapshot video through dynamic scattering medium based on deep learning.

IF 3.2 · CAS Region 2 (Physics & Astronomy) · Q2 (Optics)
Optics Express · Vol. 33, No. 7, pp. 15991-16002 · Published: 2025-04-07 · DOI: 10.1364/OE.545510
Felipe Guzmán, Esteban Vera, Ryoichi Horisaki
{"title":"Snapshot video through dynamic scattering medium based on deep learning.","authors":"Felipe Guzmán, Esteban Vera, Ryoichi Horisaki","doi":"10.1364/OE.545510","DOIUrl":null,"url":null,"abstract":"<p><p>We present an end-to-end deep learning model designed to reconstruct up to eight frames from a single snapshot of a dynamic object passing through an unknown, time-varying scattering medium. Our approach integrates a coded aperture compressive temporal imaging system with a specially designed transformer-based convolutional neural network (CNN), optimized for effective demultiplexing and reconstruction. Both simulation and experimental results demonstrate a successful compression ratio of up to 8X, while maintaining high reconstruction quality. Furthermore, ablation studies reveal that our dual-input CNN model, which utilizes both speckle patterns and their autocorrelations, significantly improves reconstruction accuracy.</p>","PeriodicalId":19691,"journal":{"name":"Optics express","volume":"33 7","pages":"15991-16002"},"PeriodicalIF":3.2000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics express","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1364/OE.545510","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

We present an end-to-end deep learning model designed to reconstruct up to eight frames from a single snapshot of a dynamic object passing through an unknown, time-varying scattering medium. Our approach integrates a coded aperture compressive temporal imaging system with a specially designed transformer-based convolutional neural network (CNN), optimized for effective demultiplexing and reconstruction. Both simulation and experimental results demonstrate a successful compression ratio of up to 8X, while maintaining high reconstruction quality. Furthermore, ablation studies reveal that our dual-input CNN model, which utilizes both speckle patterns and their autocorrelations, significantly improves reconstruction accuracy.
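The abstract gives only a high-level description of the optical pipeline and the network. Below is a minimal sketch, not the authors' implementation, of the two ingredients it names: a coded-aperture compressive temporal imaging forward model that collapses T frames into one coded snapshot, and a dual-input reconstructor that receives both the speckle snapshot and its autocorrelation. The function and class names (`cacti_forward`, `autocorrelation`, `DualInputReconstructor`), the plain convolutional backbone, and all layer sizes are illustrative assumptions; the paper itself uses a transformer-based CNN trained end-to-end.

```python
# Illustrative sketch only (assumed names and layer sizes, not the paper's model).
import torch
import torch.nn as nn
import torch.fft


def cacti_forward(frames: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Coded-aperture compressive temporal imaging: modulate each frame with its
    per-frame coded mask and sum over time.
    frames, masks: (B, T, H, W) -> snapshot: (B, 1, H, W)."""
    return (frames * masks).sum(dim=1, keepdim=True)


def autocorrelation(x: torch.Tensor) -> torch.Tensor:
    """FFT-based autocorrelation of the snapshot, supplied as the second network
    input (the abstract reports that this dual input improves reconstruction)."""
    spectrum = torch.fft.fft2(x)
    ac = torch.fft.ifft2(spectrum * spectrum.conj()).real
    ac = torch.fft.fftshift(ac, dim=(-2, -1))
    return ac / (ac.amax(dim=(-2, -1), keepdim=True) + 1e-8)


class DualInputReconstructor(nn.Module):
    """Toy dual-input reconstructor: one encoder for the speckle snapshot, one for
    its autocorrelation, fused and decoded into T output frames."""

    def __init__(self, t_frames: int = 8, width: int = 32):
        super().__init__()

        def encoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            )

        self.speckle_enc = encoder()
        self.autocorr_enc = encoder()
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, t_frames, 3, padding=1),  # one output channel per frame
        )

    def forward(self, snapshot: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.speckle_enc(snapshot),
             self.autocorr_enc(autocorrelation(snapshot))],
            dim=1,
        )
        return self.decoder(feats)  # (B, T, H, W)


if __name__ == "__main__":
    B, T, H, W = 2, 8, 64, 64
    frames = torch.rand(B, T, H, W)
    masks = (torch.rand(B, T, H, W) > 0.5).float()   # binary coded-aperture masks
    snapshot = cacti_forward(frames, masks)          # 8x temporal compression
    recon = DualInputReconstructor(t_frames=T)(snapshot)
    print(snapshot.shape, recon.shape)               # (2, 1, 64, 64) (2, 8, 64, 64)
```

Replacing the toy convolutional encoders with the paper's transformer-based blocks and training the whole pipeline end-to-end against the coded-aperture forward model corresponds to what the abstract describes; this sketch only fixes the data flow (snapshot plus autocorrelation in, T frames out).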

Source journal: Optics Express (Physics – Optics)
CiteScore: 6.60
Self-citation rate: 15.80%
Annual article count: 5182
Average review time: 2.1 months
Journal description: Optics Express is the all-electronic, open access journal for optics providing rapid publication for peer-reviewed articles that emphasize scientific and technology innovations in all aspects of optics and photonics.