Low-dose CT reconstruction using cross-domain deep learning with domain transfer module.

IF 3.3 · CAS Medicine Zone 3 · JCR Q2, Engineering, Biomedical
Yoseob Han
{"title":"Low-dose CT reconstruction using cross-domain deep learning with domain transfer module.","authors":"Yoseob Han","doi":"10.1088/1361-6560/adb932","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective</i>. X-ray computed tomography employing low-dose x-ray source is actively researched to reduce radiation exposure. However, the reduced photon count in low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods like filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, either in the image-domain or projection-domain, have demonstrated remarkable effectiveness in reducing image noise and Poisson noise caused by low-dose x-ray source. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different.<i>Approach</i>. The U-Net architecture, a type of Hourglass network, comprises encoder and decoder modules. The encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are redundantly utilized due to the sequential use of two networks, leading to increased computational demands. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions enable the transfer of features extracted by an encoder trained in input domain to target domain, thereby reducing redundant computations. The target data is then reconstructed using a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance.<i>Main results</i>. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, demonstrated effective performance by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality.<i>Significance</i>. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. 
By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high quality reconstruction images with reduced radiation doses.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics in medicine and biology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1361-6560/adb932","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective. X-ray computed tomography employing a low-dose x-ray source is actively researched to reduce radiation exposure. However, the reduced photon count of low-dose x-ray sources leads to severe noise artifacts in analytic reconstruction methods such as filtered backprojection. Recently, deep learning (DL)-based approaches employing uni-domain networks, in either the image domain or the projection domain, have proven remarkably effective at reducing the image noise and Poisson noise caused by low-dose x-ray sources. Furthermore, dual-domain networks that integrate image-domain and projection-domain networks are being developed to surpass the performance of uni-domain networks. Despite this advancement, dual-domain networks require twice the computational resources of uni-domain networks, even though their underlying network architectures are not substantially different.

Approach. The U-Net architecture, a type of hourglass network, comprises encoder and decoder modules. The encoder extracts meaningful representations from the input data, while the decoder uses these representations to reconstruct the target data. In dual-domain networks, however, encoders and decoders are redundantly utilized because two networks are applied sequentially, increasing computational demands. To address this issue, this study proposes a cross-domain DL approach that leverages analytical domain transfer functions. These functions transfer the features extracted by an encoder trained in the input domain to the target domain, thereby reducing redundant computations. The target data is then reconstructed by a decoder trained in the corresponding domain, optimizing resource efficiency without compromising performance.

Main results. The proposed cross-domain network, comprising a projection-domain encoder and an image-domain decoder, performed effectively by leveraging the domain transfer function, achieving comparable results with only half the trainable parameters of dual-domain networks. Moreover, the proposed method outperformed conventional iterative reconstruction techniques and existing DL approaches in reconstruction quality.

Significance. The proposed network leverages the transfer function to bypass redundant encoder and decoder modules, enabling direct connections between different domains. This approach not only surpasses the performance of dual-domain networks but also significantly reduces the number of required parameters. By facilitating the transfer of primal representations across domains, the method achieves synergistic effects, delivering high-quality reconstructed images at reduced radiation doses.
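The cross-domain idea in the abstract can be sketched in a few lines of PyTorch: a projection-domain encoder extracts features from the noisy sinogram, an analytic domain-transfer step maps every feature channel into the image domain, and an image-domain decoder produces the reconstruction. The sketch below is illustrative only and is not the paper's implementation; the two-layer convolutional encoder and decoder, the channel counts, and the simple unfiltered backprojection used as the domain-transfer step are assumptions standing in for the paper's U-Net modules and analytic transfer function.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def backproject(sino_feat, img_size):
    """Unfiltered backprojection of projection-domain feature maps
    (B, C, n_angles, n_det) into image-domain maps (B, C, img_size, img_size).
    Stand-in for the analytic domain transfer function (parallel-beam, toy geometry)."""
    b, c, n_angles, _ = sino_feat.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, img_size, device=sino_feat.device),
        torch.linspace(-1.0, 1.0, img_size, device=sino_feat.device),
        indexing="ij",
    )
    img = sino_feat.new_zeros(b, c, img_size, img_size)
    for i in range(n_angles):
        theta = i * math.pi / n_angles
        # detector coordinate of every image pixel for this view, normalized to [-1, 1]
        t = (xs * math.cos(theta) + ys * math.sin(theta)).clamp(-1.0, 1.0)
        grid = torch.stack([t, torch.zeros_like(t)], dim=-1)   # sample along the detector axis
        grid = grid.unsqueeze(0).expand(b, -1, -1, -1)
        view = sino_feat[:, :, i : i + 1, :]                   # single view: (B, C, 1, n_det)
        img = img + F.grid_sample(view, grid, align_corners=True)
    return img / n_angles


class CrossDomainNet(nn.Module):
    """Projection-domain encoder -> analytic domain transfer -> image-domain decoder."""

    def __init__(self, img_size=128, feat=32):
        super().__init__()
        self.img_size = img_size
        self.encoder = nn.Sequential(                          # runs on the noisy sinogram
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                          # runs on backprojected features
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, sino):                                   # sino: (B, 1, n_angles, n_det)
        feat = self.encoder(sino)                              # projection-domain features
        feat = backproject(feat, self.img_size)                # domain transfer (not trainable)
        return self.decoder(feat)                              # image-domain reconstruction


net = CrossDomainNet()
sino = torch.randn(2, 1, 180, 128)                             # toy low-dose sinogram batch
print(net(sino).shape)                                         # torch.Size([2, 1, 128, 128])

Because the encoder lives entirely in the projection domain and the decoder entirely in the image domain, the trainable parameter count is roughly that of a single uni-domain network rather than two stacked ones, which is the resource saving the abstract refers to.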

Source Journal
Physics in Medicine and Biology (Medicine - Engineering, Biomedical)
CiteScore: 6.50
Self-citation rate: 14.30%
Articles per year: 409
Review time: 2 months
Journal description: The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry.