Low-Dose Computed Tomography Image Denoising Vision Transformer Model Optimization Using Space State Method

IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
Luella Marcos, Paul Babyn, Javad Alirezaie
DOI: 10.1002/ima.70220
Journal: International Journal of Imaging Systems and Technology, Vol. 35, No. 6
Published: 2025-10-01 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/ima.70220
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70220
Citations: 0

Abstract

Low-dose computed tomography (LDCT) is widely used to reduce patient radiation exposure, but the associated increase in image noise poses challenges for diagnostic accuracy. In this study, we propose a Vision Transformer (ViT)-based denoising framework enhanced with a State Space Optimizing Block (SSOB) to improve both image quality and computational efficiency. The SSOB upgrades the multihead self-attention mechanism by reducing spatial redundancy and optimizing contextual feature fusion, thereby strengthening the transformer's ability to capture long-range dependencies and preserve fine anatomical structures under severe noise. Extensive evaluations on randomized and categorized datasets demonstrate that the proposed model consistently outperforms existing state-of-the-art denoising approaches. It achieved the highest average SSIM (up to 6.10% improvement), PSNR values (36.51 ± 0.37 dB on randomized and 36.30 ± 0.36 dB on categorized datasets), and the lowest RMSE, surpassing recent CNN-transformer hybrid denoising models by approximately 12%. Intensity profile analysis further confirmed its effectiveness, showing sharper edge transitions and more accurate gray-level distributions across anatomical boundaries, closely aligning with ground truth and retaining subtle diagnostic features often lost in competing models. In addition to improved reconstruction quality, the SSOB-empowered ViT achieved notable computational gains. It delivered the fastest inference (0.42 s per image), highest throughput (2.38 images/s), lowest GPU memory usage (750 MB), and smallest model size (7.6 MB), alongside one of the shortest training times (6.5 h). Compared to legacy architectures, which required up to 16 h of training and substantially more resources, the proposed model offers both accuracy and deployability.
Collectively, these findings establish the SSOB as a key component for efficient transformer-based LDCT denoising, addressing memory and convergence challenges while preserving global contextual advantages.
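The abstract describes the SSOB as a state-space mechanism that upgrades multihead self-attention but does not give its implementation here. As an illustrative sketch only, the core primitive behind state-space blocks, a linear recurrent scan over the token sequence, can be written in a few lines. Everything below (the diagonal parameters `A`, `B`, `C`, the `ssm_scan` helper, and the toy shapes) is an assumption for illustration, not the paper's actual SSOB.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Minimal diagonal linear state-space scan over a token sequence.

    u: (L, d) input tokens (e.g., flattened image patches);
    A, B, C: (d,) diagonal state, input, and output parameters.
    Computes x_t = A * x_{t-1} + B * u_t and y_t = C * x_t.
    """
    L, d = u.shape
    x = np.zeros(d)
    y = np.empty_like(u)
    for t in range(L):
        x = A * x + B * u[t]   # recurrent state carries long-range context
        y[t] = C * x           # read out the state at every position
    return y

# Toy example: 8 tokens of dimension 4.
rng = np.random.default_rng(0)
u = rng.standard_normal((8, 4))
A = np.full(4, 0.9)  # decay rate sets the effective context length
B = np.ones(4)
C = np.ones(4)
y = ssm_scan(u, A, B, C)
print(y.shape)  # (8, 4)
```

Unlike quadratic self-attention, this scan costs linear time and constant state memory in sequence length, which is the kind of trade-off behind the memory and throughput gains the abstract reports; practical state-space blocks add learned discretization, channel mixing, and gating on top of such a recurrence.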


Source Journal
International Journal of Imaging Systems and Technology
Category: Engineering & Technology (Imaging Science & Photographic Technology)
CiteScore: 6.90
Self-citation rate: 6.10%
Articles per year: 138
Review time: 3 months
Journal Description: The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging. The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies as well as negative results are also considered.

The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
- Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.
- Neuromodulation and brain stimulation techniques such as TMS and tDCS
- Software and hardware for imaging, especially related to human and animal health
- Image segmentation in normal and clinical populations
- Pattern analysis and classification using machine learning techniques
- Computational modeling and analysis
- Brain connectivity and connectomics
- Systems-level characterization of brain function
- Neural networks and neurorobotics
- Computer vision, based on human/animal physiology
- Brain-computer interface (BCI) technology
- Big data, databasing and data mining