{"title":"Low-Dose Computed Tomography Image Denoising Vision Transformer Model Optimization Using Space State Method","authors":"Luella Marcos, Paul Babyn, Javad Alirezaie","doi":"10.1002/ima.70220","DOIUrl":null,"url":null,"abstract":"<p>Low-dose computed tomography (LDCT) is widely used to promote reduction of patient radiation exposure, but the associated increase in image noise poses challenges for diagnostic accuracy. In this study, we propose a Vision Transformer (ViT)-based denoising framework enhanced with a State Space Optimizing Block (SSOB) to improve both image quality and computational efficiency. The SSOB upgrades the multihead self-attention mechanism by reducing spatial redundancy and optimizing contextual feature fusion, thereby strengthening the transformer's ability to capture long-range dependencies and preserve fine anatomical structures under severe noise. Extensive evaluations on randomized and categorized datasets demonstrate that the proposed model consistently outperforms existing state-of-the-art denoising approaches. It achieved the highest average SSIM (up to 6.10% improvement), PSNR values (36.51 ± 0.37 dB on randomized and 36.30 ± 0.36 dB on categorized datasets), and the lowest RMSE, surpassing recent CNN-transformer-based denoising hybrid models by approximately 12%. Intensity profile analysis further confirmed its effectiveness, showing sharper edge transitions and more accurate gray-level distributions across anatomical boundaries, closely aligning with ground truth and retaining subtle diagnostic features often lost in competing models. In addition to improved reconstruction quality, the SSOB-empowered ViT achieved notable computational gains. It delivered the fastest inference (0.42 s per image), highest throughput (2.38 images/s), lowest GPU memory usage (750 MB), and smallest model size (7.6 MB), alongside one of the shortest training times (6.5 h). Compared to legacy architectures, which required up to 16 h of training and substantially more resources, the proposed model offers both accuracy and deployability. Collectively, these findings establish the SSOB as a key component for efficient transformer-based LDCT denoising, addressing memory and convergence challenges while preserving global contextual advantages.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 6","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70220","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.70220","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0
Abstract
Low-dose computed tomography (LDCT) is widely used to reduce patient radiation exposure, but the associated increase in image noise poses challenges for diagnostic accuracy. In this study, we propose a Vision Transformer (ViT)-based denoising framework enhanced with a State Space Optimizing Block (SSOB) to improve both image quality and computational efficiency. The SSOB augments the multihead self-attention mechanism by reducing spatial redundancy and optimizing contextual feature fusion, thereby strengthening the transformer's ability to capture long-range dependencies and preserve fine anatomical structures under severe noise. Extensive evaluations on randomized and categorized datasets demonstrate that the proposed model consistently outperforms existing state-of-the-art denoising approaches. It achieved the highest average SSIM (up to a 6.10% improvement), the highest PSNR (36.51 ± 0.37 dB on the randomized and 36.30 ± 0.36 dB on the categorized datasets), and the lowest RMSE, surpassing recent hybrid CNN-transformer denoising models by approximately 12%. Intensity profile analysis further confirmed its effectiveness, showing sharper edge transitions and more accurate gray-level distributions across anatomical boundaries, closely aligning with ground truth and retaining subtle diagnostic features often lost in competing models. In addition to improved reconstruction quality, the SSOB-empowered ViT achieved notable computational gains: the fastest inference (0.42 s per image), the highest throughput (2.38 images/s), the lowest GPU memory usage (750 MB), and the smallest model size (7.6 MB), alongside one of the shortest training times (6.5 h). Compared with legacy architectures, which required up to 16 h of training and substantially more resources, the proposed model offers both accuracy and deployability. Collectively, these findings establish the SSOB as a key component for efficient transformer-based LDCT denoising, addressing memory and convergence challenges while preserving global contextual advantages.
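The abstract does not specify the internal design of the SSOB, so the following is only a minimal sketch, written in PyTorch, of how a state-space-style branch might be coupled to a ViT block's multi-head self-attention over LDCT patch tokens. All names (SSOBlock, DenoisingViTBlock, d_model, d_state) and the gated linear-recurrence formulation are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a pre-norm ViT block whose attention output is refined by a
# simple state-space-style branch (gated linear recurrence over tokens). Names and
# design choices are assumptions; the paper's actual SSOB is not given in the abstract.
import torch
import torch.nn as nn


class SSOBlock(nn.Module):
    """Illustrative state-space-style block: a gated linear recurrence over the token axis."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_state)
        self.out_proj = nn.Linear(d_state, d_model)
        self.decay = nn.Parameter(torch.full((d_state,), 0.9))  # learnable per-channel decay
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, d_model)
        u = self.in_proj(x)                       # (B, N, d_state)
        a = torch.sigmoid(self.decay)             # keep the recurrence stable in (0, 1)
        h = torch.zeros(u.size(0), u.size(2), device=u.device)
        states = []
        for t in range(u.size(1)):                # linear scan over the token sequence
            h = a * h + (1.0 - a) * u[:, t]
            states.append(h)
        s = torch.stack(states, dim=1)            # (B, N, d_state)
        return torch.sigmoid(self.gate(x)) * self.out_proj(s)  # gated fusion with the input


class DenoisingViTBlock(nn.Module):
    """Pre-norm ViT block with attention output refined by the state-space branch."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ssob = SSOBlock(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, d_model) patch tokens
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + self.ssob(attn_out)               # state-space refinement of attention output
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    tokens = torch.randn(2, 64, 256)              # e.g., 64 patch tokens from an LDCT slice
    print(DenoisingViTBlock()(tokens).shape)      # torch.Size([2, 64, 256])
```

In this sketch, the linear scan stands in for the long-range, low-cost context aggregation the abstract attributes to the SSOB; a production model would replace the Python loop with a parallel or selective state-space kernel.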
Journal Introduction:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision, based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.