A dense and U-shaped transformer with dual-domain multi-loss function for sparse-view CT reconstruction.

Impact Factor: 1.7 · CAS Tier 3 (Medicine) · JCR Q3, Instruments & Instrumentation
Peng Liu, Chenyun Fang, Zhiwei Qiao
DOI: 10.3233/XST-230184
Journal of X-Ray Science and Technology, pp. 207-228, published 2024-01-01 (Journal Article).
Citations: 0

Abstract

Objective: CT image reconstruction from sparse-view projections is an important imaging configuration for low-dose CT, as it can reduce radiation dose. However, the CT images reconstructed from sparse-view projections by traditional analytic algorithms suffer from severe sparse artifacts. Therefore, it is of great value to develop advanced methods to suppress these artifacts. In this work, we aim to use a deep learning (DL)-based method to suppress sparse artifacts.

Methods: Inspired by the strong performance of DenseNet and the Transformer architecture in computer vision tasks, we propose a Dense U-shaped Transformer (D-U-Transformer) to suppress sparse artifacts. This architecture exploits the advantages of densely connected convolutions in capturing local context and of the Transformer in modelling long-range dependencies, and applies channel attention to fuse features. Moreover, we design a dual-domain multi-loss function with learned weights for the optimization of the model to further improve image quality.
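The abstract does not specify how the per-domain loss weights are learned. One common scheme for learning weights across loss terms, shown here purely as an illustrative sketch (the terms `image_domain_loss` and `sinogram_domain_loss` are hypothetical names, not taken from the paper), is homoscedastic-uncertainty weighting, where each loss is scaled by a learned log-variance:

```python
import math

def weighted_multi_loss(losses, log_vars):
    """Combine per-domain losses with learned weights (uncertainty weighting).

    losses:   scalar loss values, e.g. [image_domain_loss, sinogram_domain_loss]
    log_vars: learned log-variance parameters s_i, one per loss term.
              Each loss is scaled by exp(-s_i) and regularized by +s_i,
              so the optimizer cannot trivially drive all weights to zero.
    """
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Example: two domain losses with initial log-variances of zero,
# i.e. both terms start with unit weight and no regularization penalty.
combined = weighted_multi_loss([0.5, 0.2], [0.0, 0.0])
print(combined)  # 0.7
```

In a training loop the `log_vars` would be registered as trainable parameters alongside the network weights, so the balance between the image-domain and projection-domain terms is adjusted automatically during optimization.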

Results: The proposed D-U-Transformer yields performance improvements on the well-known Mayo Clinic LDCT dataset over several representative DL-based models in terms of artifact suppression and image feature preservation. Extensive internal ablation experiments demonstrate the effectiveness of the components of the proposed model for sparse-view computed tomography (SVCT) reconstruction.

Significance: The proposed method can effectively suppress sparse artifacts and achieve high-precision SVCT reconstruction, thus promoting clinical CT scanning towards low-dose radiation and high-quality imaging. The findings of this work can be applied to denoising and artifact removal tasks in CT and other medical images.

Source journal metrics: CiteScore 4.90 · Self-citation rate 23.30% · Articles per year: 150 · Review time: 3 months
Aims and scope: Research areas within the scope of the journal include: interaction of x-rays with matter (x-ray phenomena, biological effects of radiation, radiation safety, and optical constants); x-ray sources (x-rays from synchrotrons, x-ray lasers, plasmas, and other sources, conventional or unconventional); optical elements (grazing-incidence optics, multilayer mirrors, zone plates, gratings, and other diffraction optics); and optical instruments (interferometers, spectrometers, microscopes, telescopes, and microprobes).