PtychoDV: Vision Transformer-Based Deep Unrolling Network for Ptychographic Image Reconstruction

IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Weijie Gan;Qiuchen Zhai;Michael T. McCann;Cristina Garcia Cardona;Ulugbek S. Kamilov;Brendt Wohlberg
{"title":"PtychoDV: Vision Transformer-Based Deep Unrolling Network for Ptychographic Image Reconstruction","authors":"Weijie Gan;Qiuchen Zhai;Michael T. McCann;Cristina Garcia Cardona;Ulugbek S. Kamilov;Brendt Wohlberg","doi":"10.1109/OJSP.2024.3375276","DOIUrl":null,"url":null,"abstract":"Ptychography is an imaging technique that captures multiple overlapping snapshots of a sample, illuminated coherently by a moving localized probe. The image recovery from ptychographic data is generally achieved via an iterative algorithm that solves a nonlinear phase retrieval problem derived from measured diffraction patterns. However, these iterative approaches have high computational cost. In this paper, we introduce PtychoDV, a novel deep model-based network designed for efficient, high-quality ptychographic image reconstruction. PtychoDV comprises a vision transformer that generates an initial image from the set of raw measurements, taking into consideration their mutual correlations. This is followed by a deep unrolling network that refines the initial image using learnable convolutional priors and the ptychography measurement model. Experimental results on simulated data demonstrate that PtychoDV is capable of outperforming existing deep learning methods for this problem, and significantly reduces computational cost compared to iterative methodologies, while maintaining competitive performance.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"5 ","pages":"539-547"},"PeriodicalIF":2.9000,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10463649","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of signal processing","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10463649/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Ptychography is an imaging technique that captures multiple overlapping snapshots of a sample, illuminated coherently by a moving localized probe. The image recovery from ptychographic data is generally achieved via an iterative algorithm that solves a nonlinear phase retrieval problem derived from measured diffraction patterns. However, these iterative approaches have high computational cost. In this paper, we introduce PtychoDV, a novel deep model-based network designed for efficient, high-quality ptychographic image reconstruction. PtychoDV comprises a vision transformer that generates an initial image from the set of raw measurements, taking into consideration their mutual correlations. This is followed by a deep unrolling network that refines the initial image using learnable convolutional priors and the ptychography measurement model. Experimental results on simulated data demonstrate that PtychoDV is capable of outperforming existing deep learning methods for this problem, and significantly reduces computational cost compared to iterative methodologies, while maintaining competitive performance.
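To make the two-stage design described in the abstract concrete, the following is a minimal PyTorch-style sketch assembled from the abstract alone: a vision transformer that maps the set of diffraction measurements to an initial complex-valued image while modeling their mutual correlations, followed by unrolled iterations that alternate a data-consistency step built on a simplified ptychography measurement model with a small learnable convolutional prior. All module names (ViTInitializer, UnrolledRefiner, dc_step), layer sizes, the far-field forward model, and the projection-style data-consistency update are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


def dc_step(x_c, meas, probe, coords, patch):
    """Projection-style data-consistency update (assumption, not the paper's exact
    operator): at each scan position, enforce the measured Fourier magnitudes on
    the probe-illuminated patch and back-propagate the correction into the image."""
    upd = torch.zeros_like(x_c)
    for i, (r, c) in enumerate(coords):
        exit_wave = probe * x_c[:, r:r + patch, c:c + patch]
        F = torch.fft.fft2(exit_wave)
        F_fix = meas[:, i] * torch.exp(1j * torch.angle(F))  # keep phase, impose measured magnitude
        delta = torch.fft.ifft2(F_fix) - exit_wave
        upd[:, r:r + patch, c:c + patch] += torch.conj(probe) * delta
    return upd


class ViTInitializer(nn.Module):
    """Tokenize the raw measurements, model their mutual correlations with
    self-attention, and decode an initial complex-valued image estimate."""
    def __init__(self, patch=64, img=256, dim=256, depth=4):
        super().__init__()
        self.embed = nn.Linear(patch * patch, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.decode = nn.Linear(dim, 2 * patch * patch)  # real + imaginary parts per token
        self.patch, self.img = patch, img

    def forward(self, meas, coords):
        B, S, p, _ = meas.shape
        tokens = self.encoder(self.embed(meas.reshape(B, S, -1)))
        patches = self.decode(tokens).reshape(B, S, 2, p, p)
        init = meas.new_zeros(B, 2, self.img, self.img)
        for i, (r, c) in enumerate(coords):  # paste decoded patches at scan positions (overlaps simply sum here)
            init[:, :, r:r + p, c:c + p] += patches[:, i]
        return init  # channel 0: real part, channel 1: imaginary part


class UnrolledRefiner(nn.Module):
    """Unrolled iterations alternating the data-consistency step above with a
    small learnable CNN prior (one prior and step size per iteration)."""
    def __init__(self, iters=5):
        super().__init__()
        self.priors = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1)) for _ in range(iters)])
        self.step = nn.Parameter(torch.full((iters,), 0.5))

    def forward(self, x, meas, probe, coords, patch):
        for k, prior in enumerate(self.priors):
            x_c = torch.complex(x[:, 0], x[:, 1])
            x_c = x_c + self.step[k] * dc_step(x_c, meas, probe, coords, patch)
            x = torch.stack((x_c.real, x_c.imag), dim=1)
            x = x - prior(x)  # residual correction from the learned convolutional prior
        return x


if __name__ == "__main__":
    # Tiny smoke test on random data: 9 overlapping 64x64 scans of a 256x256 object.
    B, p, img = 1, 64, 256
    coords = [(r, c) for r in (0, 96, 192) for c in (0, 96, 192)]
    meas = torch.rand(B, len(coords), p, p)               # stand-in for measured Fourier magnitudes
    probe = torch.ones(p, p, dtype=torch.complex64)       # stand-in for the localized probe
    init = ViTInitializer(patch=p, img=img)(meas, coords)
    recon = UnrolledRefiner(iters=3)(init, meas, probe, coords, patch=p)
    print(recon.shape)                                    # torch.Size([1, 2, 256, 256])
```

The smoke test only checks shapes on random inputs; in the paper the pipeline is presumably trained end-to-end on simulated ptychographic data, so the step sizes and prior weights here would be learned rather than hand-set.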
Source journal: IEEE Open Journal of Signal Processing
CiteScore: 5.30
Self-citation rate: 0.00%
Articles published: 0
Review time: 22 weeks