Configurable multiple virtual lenses conjugated with singlet physical lens for achromatic extended depth-of-field imaging.

Impact Factor 3.2 · JCR Q2 (Optics) · CAS Region 2 (Physics & Astronomy)
Optics Express · Published 2024-11-04 · DOI: 10.1364/OE.538670
Cuizhen Lu, Yuankun Liu, Tianyue He, Chongyang Zhang, Yilan Nan, Cui Huang, Junfei Shen
Optics Express, Vol. 32, No. 23, pp. 40427-40452 · Citations: 0

Abstract

An achromatic extended depth-of-field (EDOF) system can obtain clear scene information that is crucial for target recognition, dynamic monitoring, and other applications. However, the imaging performance of most optical systems is depth-variant and wavelength-variant, which leads to the generation of chromatic aberrations. Traditional optical design and image post-processing algorithms cannot effectively eliminate these chromatic aberrations. Here, we propose a deep configurable multiple virtual lenses optimization method that embeds four virtual lenses in parallel conjugated with a real lens. Combined with a lens fusion recovery network (LFRNet), it compensates for chromatic aberrations at different depths to achieve achromatic EDOF imaging. Trainable virtual optics can eliminate chromatic aberrations and overcome the limitations of traditional optics. The proposed framework reduces the optical design complexity and improves the imaging quality of a simple optical system. We validate our method using a singlet lens, and the experimental results show that the reconstructed images have an average peak signal-to-noise ratio (PSNR) improvement of 12.1447 dB and an average structural similarity index measure (SSIM) improvement of 0.2465. The proposed method opens a new avenue for ultra-compact, high-freedom, high-efficiency, and wholly configurable deep optics design, and empowers various advanced applications, such as portable photography and other complex vision tasks.
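The reconstruction quality above is reported as an average PSNR gain. As a point of reference, here is a minimal sketch of how PSNR is conventionally computed between a reference image and a reconstruction; the function name and the example arrays are illustrative, not from the paper (SSIM is omitted, since it requires windowed local statistics).

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max_val^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform 0.1 error on a [0, 1]-scaled image gives MSE = 0.01,
# hence PSNR = 10*log10(1/0.01) = 20 dB.
ref = np.zeros((8, 8))
noisy = np.full((8, 8), 0.1)
print(round(psnr(ref, noisy), 2))  # 20.0
```

A 12.14 dB improvement in this metric corresponds to roughly a sixteen-fold reduction in mean squared error, which indicates how strongly the virtual-lens compensation suppresses depth- and wavelength-dependent blur.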

Source journal: Optics Express (Physics – Optics)
CiteScore: 6.60 · Self-citation rate: 15.80% · Articles per year: 5182 · Median review time: 2.1 months
Journal description: Optics Express is the all-electronic, open-access journal for optics, providing rapid publication of peer-reviewed articles that emphasize scientific and technological innovations in all aspects of optics and photonics.