Diff-Holo: A Residual Diffusion Model With Complex Transformer for Rapid Single-Frame Hologram Reconstruction
Ziqi Bai; Xianming Liu; Cheng Guo; Kui Jiang; Junjun Jiang; Xiangyang Ji
IEEE Transactions on Computational Imaging, vol. 11, pp. 689-703 (published 2025-04-16)
DOI: 10.1109/TCI.2025.3561683
URL: https://ieeexplore.ieee.org/document/10966195/
Citations: 0
Abstract
Deep learning approaches have gained significant traction in holographic imaging, with diffusion models—an emerging class of deep generative models—showing particular promise in hologram reconstruction. Unlike conventional neural networks that directly generate outputs, diffusion models gradually add noise to data and train neural networks to remove it, enabling them to learn implicit priors of the underlying data distribution. However, current diffusion-based hologram reconstruction methods often require hundreds or even thousands of iterations to achieve high-fidelity results, leading to processing times of several minutes or more—falling short of the fast imaging demands of holographic systems. To address this, we propose Diff-Holo, a residual diffusion model integrated with a complex transformer, designed for rapid and high-quality single-frame hologram reconstruction. Specifically, we create a shorter and more efficient Markov chain by controlling the residuals between clean images and those degraded by twin-image artifacts. Additionally, we incorporate complex-valued priors into the network by using a complex window-based transformer as the backbone, enhancing the network's ability to process complex-valued data in the reverse reconstruction process. Experimental results demonstrate that Diff-Holo achieves high-quality single-frame reconstructions in as few as 15 sampling steps, reducing reconstruction time from minutes to under 2.2 seconds.
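To make the residual-diffusion idea concrete, below is a minimal sketch of a forward step in which the clean image is shifted toward the twin-image-degraded observation rather than toward pure Gaussian noise, which is what allows a much shorter Markov chain. The function name, shifting schedule, and noise schedule are illustrative assumptions and are not taken from the paper; Diff-Holo's actual formulation, schedules, and complex-transformer backbone may differ.

```python
import torch

def residual_forward_step(x0, y, eta_t, sigma_t):
    """One step of a residual-style diffusion forward process (sketch).

    Instead of diffusing the clean image x0 toward pure Gaussian noise,
    the chain shifts it toward the degraded observation y (here, a
    twin-image-corrupted reconstruction), so far fewer steps are needed
    to bridge the clean and degraded distributions.

    x0      : clean image tensor, shape (B, C, H, W)
    y       : degraded (twin-image) counterpart, same shape
    eta_t   : scalar in [0, 1] controlling how much of the residual is added
    sigma_t : noise level at step t
    """
    residual = y - x0                      # residual between degraded and clean image
    noise = torch.randn_like(x0)           # Gaussian perturbation
    return x0 + eta_t * residual + sigma_t * noise


# Toy usage: a 15-step schedule, matching the sampling budget reported in the abstract.
if __name__ == "__main__":
    B, C, H, W = 1, 1, 64, 64
    x0 = torch.rand(B, C, H, W)            # stand-in for a clean reconstruction
    y = x0 + 0.3 * torch.rand(B, C, H, W)  # stand-in for a twin-image-degraded input
    T = 15
    for t in range(1, T + 1):
        eta_t = t / T                       # simple linear shifting schedule (assumption)
        sigma_t = 0.05 * (t / T) ** 0.5     # illustrative noise schedule (assumption)
        x_t = residual_forward_step(x0, y, eta_t, sigma_t)
```

At t = T the sample sits near the degraded observation plus a small amount of noise, so the learned reverse process only has to remove the residual and noise rather than reconstruct the image from scratch, which is why a short chain of around 15 steps can suffice.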
Journal description:
The IEEE Transactions on Computational Imaging will publish articles where computation plays an integral role in the image formation process. Papers will cover all areas of computational imaging ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest will include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.