{"title":"多个动态计算虚拟透镜,以克服消色差扩展景深成像的带宽限制","authors":"Cuizhen Lu, Yuankun Liu, Tianyue He, Chongyang Zhang, Yilan Nan, Cui Huang, Junfei Shen","doi":"10.1016/j.optcom.2025.132460","DOIUrl":null,"url":null,"abstract":"<div><div>The Achromatic extended depth-of-field (AEDOF) system can achieve high-fidelity imaging, greatly benefiting fields such as microscopy and biomedical imaging. However, due to depth-variant and wavelength-variant imaging performance, traditional optical designs struggle to cover the entire depth range of interest, where severe bandwidth limit exists. Here, we propose a method of learning virtual lenses (VLs) to beat the optical limits of the singlet lens, and construct a hybrid real-virtual system to obtain broadband AEDOF images. By positioning object imaging depths, the multiple VLs are adaptively embedded in parallel and conjugated with the singlet lens to compensate for imaging differences of these depths. Sequential depth-dependent achromatic images are produced by VLs and fused to recover high-quality image. Comparing to the input image, our method demonstrates an average improvement of 12.3907 dB in Peak Signal-to-Noise Ratio (PSNR), and 0.2437 in Structural Similarity Index Measure (SSIM). Learning-based VLs can dynamically compensate for the real lens, overcoming the bandwidth limits of traditional optics and successfully realizing ultracompact AEDOF imaging. The proposed method provides a feasible solution for configurable computational imaging, allowing for the creation of accurate mapping between high-fidelity images and a single large aperture meta-optic. Fundamentally, our work opens a new avenue for application in portable and mobile photography.</div></div>","PeriodicalId":19586,"journal":{"name":"Optics Communications","volume":"596 ","pages":"Article 132460"},"PeriodicalIF":2.5000,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiple dynamic computational virtual lenses to beat bandwidth limits for achromatic extended depth-of-field imaging\",\"authors\":\"Cuizhen Lu, Yuankun Liu, Tianyue He, Chongyang Zhang, Yilan Nan, Cui Huang, Junfei Shen\",\"doi\":\"10.1016/j.optcom.2025.132460\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The Achromatic extended depth-of-field (AEDOF) system can achieve high-fidelity imaging, greatly benefiting fields such as microscopy and biomedical imaging. However, due to depth-variant and wavelength-variant imaging performance, traditional optical designs struggle to cover the entire depth range of interest, where severe bandwidth limit exists. Here, we propose a method of learning virtual lenses (VLs) to beat the optical limits of the singlet lens, and construct a hybrid real-virtual system to obtain broadband AEDOF images. By positioning object imaging depths, the multiple VLs are adaptively embedded in parallel and conjugated with the singlet lens to compensate for imaging differences of these depths. Sequential depth-dependent achromatic images are produced by VLs and fused to recover high-quality image. Comparing to the input image, our method demonstrates an average improvement of 12.3907 dB in Peak Signal-to-Noise Ratio (PSNR), and 0.2437 in Structural Similarity Index Measure (SSIM). Learning-based VLs can dynamically compensate for the real lens, overcoming the bandwidth limits of traditional optics and successfully realizing ultracompact AEDOF imaging. 
The proposed method provides a feasible solution for configurable computational imaging, allowing for the creation of accurate mapping between high-fidelity images and a single large aperture meta-optic. Fundamentally, our work opens a new avenue for application in portable and mobile photography.</div></div>\",\"PeriodicalId\":19586,\"journal\":{\"name\":\"Optics Communications\",\"volume\":\"596 \",\"pages\":\"Article 132460\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics Communications\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0030401825009885\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics Communications","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0030401825009885","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
Multiple dynamic computational virtual lenses to beat bandwidth limits for achromatic extended depth-of-field imaging
Achromatic extended depth-of-field (AEDOF) systems can achieve high-fidelity imaging, greatly benefiting fields such as microscopy and biomedical imaging. However, because imaging performance varies with both depth and wavelength, traditional optical designs struggle to cover the entire depth range of interest and suffer from a severe bandwidth limit. Here, we propose a method that learns virtual lenses (VLs) to overcome the optical limits of a singlet lens, and we construct a hybrid real-virtual system to obtain broadband AEDOF images. By locating the object imaging depths, multiple VLs are adaptively embedded in parallel and conjugated with the singlet lens to compensate for the imaging differences across these depths. The VLs produce a sequence of depth-dependent achromatic images, which are fused to recover a high-quality image. Compared with the input image, our method achieves an average improvement of 12.3907 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.2437 in Structural Similarity Index Measure (SSIM). Learning-based VLs can dynamically compensate for the real lens, overcoming the bandwidth limits of traditional optics and realizing ultracompact AEDOF imaging. The proposed method provides a feasible solution for configurable computational imaging, allowing an accurate mapping to be established between high-fidelity images and a single large-aperture meta-optic. Fundamentally, our work opens a new avenue for applications in portable and mobile photography.
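
To make the parallel virtual-lens idea in the abstract concrete, the following is a minimal, hypothetical sketch in PyTorch: each depth-specific virtual lens is stood in for by a small residual CNN, the per-depth outputs are fused with softmax weights, and a toy PSNR helper mirrors the reported metric. The class names (VirtualLens, ParallelVirtualLenses), layer sizes, fusion rule, and untrained weights are illustrative assumptions only and do not describe the authors' actual network or optics.

import torch
import torch.nn as nn


class VirtualLens(nn.Module):
    """Toy stand-in for one learned virtual lens tied to a single object depth (assumption)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual correction of the aberrated capture from the real singlet lens.
        return x + self.body(x)


class ParallelVirtualLenses(nn.Module):
    """Apply several virtual lenses in parallel and fuse their depth-wise outputs."""

    def __init__(self, num_depths: int = 4, channels: int = 3):
        super().__init__()
        self.lenses = nn.ModuleList(VirtualLens(channels) for _ in range(num_depths))
        # A 1x1 conv predicts one fusion weight map per depth (softmax-normalized below).
        self.fusion = nn.Conv2d(num_depths * channels, num_depths, kernel_size=1)

    def forward(self, capture: torch.Tensor) -> torch.Tensor:
        candidates = [lens(capture) for lens in self.lenses]   # one corrected image per depth
        stacked = torch.stack(candidates, dim=1)               # (B, D, C, H, W)
        weights = torch.softmax(self.fusion(torch.cat(candidates, dim=1)), dim=1)
        return (weights.unsqueeze(2) * stacked).sum(dim=1)     # fused AEDOF estimate


def psnr(reference: torch.Tensor, test: torch.Tensor, data_range: float = 1.0) -> torch.Tensor:
    """Standard peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = torch.mean((reference - test) ** 2)
    return 10.0 * torch.log10(data_range ** 2 / mse)


if __name__ == "__main__":
    blurred = torch.rand(1, 3, 64, 64)     # simulated depth- and wavelength-degraded capture
    sharp = torch.rand(1, 3, 64, 64)       # placeholder ground truth
    model = ParallelVirtualLenses(num_depths=4)
    restored = model(blurred)
    print(restored.shape)                  # torch.Size([1, 3, 64, 64])
    print(float(psnr(sharp, restored)))    # untrained weights, so no real gain is expected

In this sketch the softmax-weighted fusion plays the role of the depth-dependent fusion step described above; in the actual work, both the per-depth compensation and the fusion would presumably be learned end-to-end so that the fused output yields the reported PSNR and SSIM gains over the raw capture.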
Journal introduction:
Optics Communications invites original and timely contributions containing new results in various fields of optics and photonics. The journal considers theoretical and experimental research in areas ranging from the fundamental properties of light to technological applications. Topics covered include classical and quantum optics, optical physics and light-matter interactions, lasers, imaging, guided-wave optics and optical information processing. Manuscripts should offer clear evidence of novelty and significance. Papers concentrating on mathematical and computational issues, with limited connection to optics, are not suitable for publication in the journal. Similarly, small technical advances, or papers concerned only with engineering applications or issues of materials science, fall outside the journal's scope.