Enhancing digital hologram reconstruction using reverse-attention loss for untrained physics-driven deep learning models with uncertain distance

Xiwen Chen, Hao Wang, Zhao Zhang, Zhenmin Li, Huayu Li, Tong Ye, A. Razi
{"title":"Enhancing digital hologram reconstruction using reverse-attention loss for untrained physics-driven deep learning models with uncertain distance","authors":"Xiwen Chen, Hao Wang, Zhao Zhang, Zhenmin Li, Huayu Li, Tong Ye, A. Razi","doi":"10.1117/12.3005570","DOIUrl":null,"url":null,"abstract":"Untrained Physics-based Deep Learning (DL) methods for digital holography have gained significant attention due to their benefits, such as not requiring an annotated training dataset, and providing interpretability since utilizing the governing laws of hologram formation. However, they are sensitive to the hard-to-obtain precise object distance from the imaging plane, posing the $\\textit{Autofocusing}$ challenge. Conventional solutions involve reconstructing image stacks for different potential distances and applying focus metrics to select the best results, which apparently is computationally inefficient. In contrast, recently developed DL-based methods treat it as a supervised task, which again needs annotated data and lacks generalizability. To address this issue, we propose $\\textit{reverse-attention loss}$, a weighted sum of losses for all possible candidates with learnable weights. This is a pioneering approach to addressing the Autofocusing challenge in untrained deep-learning methods. Both theoretical analysis and experiments demonstrate its superiority in efficiency and accuracy. Interestingly, our method presents a significant reconstruction performance over rival methods (i.e. alternating descent-like optimization, non-weighted loss integration, and random distance assignment) and even is almost equal to that achieved with a precisely known object distance. For example, the difference is less than 1dB in PSNR and 0.002 in SSIM for the target sample in our experiment.","PeriodicalId":517856,"journal":{"name":"AI and Optical Data Sciences V","volume":"50 10","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and Optical Data Sciences V","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3005570","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Untrained physics-based deep learning (DL) methods for digital holography have gained significant attention because they require no annotated training dataset and remain interpretable, since they build on the governing laws of hologram formation. However, they are sensitive to the object-to-imaging-plane distance, which is hard to measure precisely; this is the $\textit{Autofocusing}$ challenge. Conventional solutions reconstruct image stacks at a range of candidate distances and apply focus metrics to select the best result, which is computationally inefficient. Recently developed DL-based methods instead treat autofocusing as a supervised task, which again requires annotated data and limits generalizability. To address this issue, we propose the $\textit{reverse-attention loss}$, a weighted sum of the losses for all candidate distances with learnable weights. This is a pioneering approach to the Autofocusing challenge in untrained deep-learning methods. Both theoretical analysis and experiments demonstrate its superiority in efficiency and accuracy. Notably, our method delivers significantly better reconstruction performance than rival strategies (alternating descent-like optimization, non-weighted loss integration, and random distance assignment) and comes close to the performance achieved with a precisely known object distance: for the target sample in our experiment, the difference is less than 1 dB in PSNR and 0.002 in SSIM.
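To make the core idea concrete, below is a minimal PyTorch sketch of a weighted-sum loss over candidate distances with learnable weights, in the spirit of the reverse-attention loss described above. The softmax parameterization, the candidate-distance grid, and the helper `render_hologram` (standing in for a differentiable hologram-formation model, e.g. angular spectrum propagation) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedCandidateLoss(nn.Module):
    """Weighted sum of per-candidate reconstruction losses with learnable weights.

    A minimal sketch of the idea in the abstract: rather than fixing one object
    distance, keep a loss term for every candidate distance and let the
    optimizer learn how to weight them jointly with the network parameters.
    """

    def __init__(self, num_candidates: int):
        super().__init__()
        # One learnable logit per candidate distance; the softmax below keeps
        # the weights positive and summing to one.
        self.logits = nn.Parameter(torch.zeros(num_candidates))

    def forward(self, candidate_losses: torch.Tensor) -> torch.Tensor:
        # candidate_losses: shape (num_candidates,), e.g. the data-fidelity
        # loss between the measured hologram and the hologram re-rendered by
        # the physics model at each candidate distance.
        weights = F.softmax(self.logits, dim=0)
        return torch.sum(weights * candidate_losses)


# Hypothetical usage. `render_hologram(obj, d)` is an assumed differentiable
# forward model; `obj` is the current object estimate from the untrained
# network and `hologram` is the measurement.
#
# distances = torch.linspace(5e-3, 15e-3, steps=11)  # candidate distances (m)
# loss_fn = WeightedCandidateLoss(num_candidates=len(distances))
# losses = torch.stack([F.mse_loss(render_hologram(obj, d), hologram)
#                       for d in distances])
# total = loss_fn(losses)  # backprop updates the network and the weights
```

Because the weights are optimized alongside the network, a single training run can concentrate the loss on the best-focused candidate, avoiding both an exhaustive stack-plus-focus-metric search and a supervised distance regressor.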