MVMS-RCN: A Dual-Domain Unified CT Reconstruction With Multi-Sparse-View and Multi-Scale Refinement-Correction

Impact Factor 4.2 · CAS Tier 2 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
Xiaohong Fan, Ke Chen, Huaming Yi, Yin Yang, Jianping Zhang
DOI: 10.1109/TCI.2024.3507645
Journal: IEEE Transactions on Computational Imaging, vol. 10, pp. 1749-1762
Published: 2024-11-27 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10769006/
Citations: 0

Abstract

X-ray Computed Tomography (CT) is one of the most important diagnostic imaging techniques in clinical applications. Sparse-view CT imaging reduces the number of projection views to lower the radiation dose and alleviate the potential risk of radiation exposure. Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods: 1) do not fully use the projection data; 2) do not always link their architecture designs to a mathematical theory; and 3) do not flexibly handle multi-sparse-view reconstruction tasks. This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view CT reconstruction. We propose a novel dual-domain unified framework that offers a great deal of flexibility for multi-sparse-view CT reconstruction through a single model. This framework combines the theoretical advantages of model-based methods with the superior reconstruction performance of DL-based methods, resulting in the expected generalizability of DL. We propose a refinement module that utilizes the unfolded projection domain to refine full-sparse-view projection errors, as well as an image-domain correction module that distills multi-scale geometric error corrections to reconstruct sparse-view CT. This provides us with a new way to explore the potential of projection information and a new perspective on designing network architectures. The multi-scale geometric correction module is end-to-end learnable, and our method can function as a plug-and-play reconstruction technique, adaptable to various applications. Extensive experiments demonstrate that our framework is superior to other existing state-of-the-art methods.
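The dual-domain idea the abstract describes, alternating a projection-domain refinement step (driving down the error against the measured projection data) with an image-domain correction step, can be illustrated with a minimal numerical sketch. This is not the paper's MVMS-RCN: the learned refinement and multi-scale correction modules are replaced here by a plain gradient step and a soft-threshold prior, and a random matrix `A` stands in for the sparse-view projection operator. All names and parameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins only (NOT the paper's learned modules):
# A          : random matrix in place of the sparse-view projection operator
# soft_threshold : simple sparsity prior in place of the learned
#                  multi-scale image-domain correction module
rng = np.random.default_rng(0)
m, n = 60, 100                     # 60 "views" of a length-100 image (underdetermined)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0
y = A @ x_true                     # simulated sparse-view projection data

def soft_threshold(v, t):
    """Placeholder for the learned image-domain correction."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
eta = 0.1                          # step size for the data-consistency step
for _ in range(200):               # unrolled stages (learned in the paper)
    residual = A @ x - y           # projection-domain error
    x = x - eta * A.T @ residual   # refinement: enforce projection consistency
    x = soft_threshold(x, 0.01)    # correction: apply image-domain prior

print(np.linalg.norm(A @ x - y))   # residual shrinks from ~3 toward 0
```

Each loop iteration corresponds to one unfolded stage: the refinement step keeps the estimate consistent with the measured projections, while the correction step injects image-domain structure; in MVMS-RCN both operations are replaced by trained networks, which is what the "deep unfolding" design refers to.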
Source journal: IEEE Transactions on Computational Imaging (Mathematics, Computational Mathematics)
CiteScore: 8.20
Self-citation rate: 7.40%
Articles per year: 59
Journal description: The IEEE Transactions on Computational Imaging publishes articles where computation plays an integral role in the image formation process. Papers cover all areas of computational imaging, ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.