{"title":"Multi-scale feature adaptive aggregation Transformer for super-resolution of lung computed tomography images","authors":"Yanmei Li, Qibin Yang, Fen Zhao, Jingshi Deng, Quanhao Ren, Yulong Pan","doi":"10.1016/j.bspc.2025.108126","DOIUrl":null,"url":null,"abstract":"<div><div>High-resolution computed tomography (CT) images help doctors diagnose lung diseases by providing detailed information about underlying pathology. However, most current super-resolution methods still face the following problems: (1) Insufficient performance in restoring fine structure and high-frequency details of local edges, resulting in blurring of the reconstructed CT images. (2) These models are usually complex in structure and have a large number of parameters, which is both inefficient and requires additional computational resources. To address these issues, we propose an efficient Transformer model for super-resolution of lung CT images, named MFAT. Specifically, we propose a multi-scale feature adaptive aggregation strategy (MSAS) that splits features into multiple scales and uses independent computation at each scale to learn the corresponding feature representations while extracting image features in different receptive fields to enhance the fusion between the multi-level information. Additionally, we propose hybrid channel local window attention, which combines local context information and channel mixing to improve image texture expression and enhance detail clarity in reconstructed CT images. Finally, we design parameter-free attention mechanisms that utilize edge operators and multi-scale weighting to enhance highly contributing information and suppress redundant information, while also balancing the number of parameters. Extensive experiments on the COVID-CT dataset demonstrate that MFAT achieves a PSNR of 35.61 dB and 33.34 dB, and an SSIM of 0.9139 and 0.8706 at scale factors of ×3 and ×4, respectively, outperforming state-of-the-art methods. These results show that our method excels at reconstructing high-resolution lung CT images and recovering sharper image details.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"110 ","pages":"Article 108126"},"PeriodicalIF":4.9000,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425006378","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0
Abstract
High-resolution computed tomography (CT) images help doctors diagnose lung diseases by providing detailed information about the underlying pathology. However, most current super-resolution methods still face the following problems: (1) insufficient performance in restoring fine structures and the high-frequency details of local edges, which blurs the reconstructed CT images; (2) these models are usually structurally complex with large numbers of parameters, which is inefficient and demands additional computational resources. To address these issues, we propose an efficient Transformer model for super-resolution of lung CT images, named MFAT. Specifically, we propose a multi-scale feature adaptive aggregation strategy (MSAS) that splits features into multiple scales and applies independent computation at each scale to learn the corresponding feature representations, while extracting image features over different receptive fields to strengthen the fusion of multi-level information. Additionally, we propose a hybrid channel local window attention that combines local context information with channel mixing to improve texture expression and enhance detail clarity in the reconstructed CT images. Finally, we design a parameter-free attention mechanism that uses edge operators and multi-scale weighting to enhance highly contributing information and suppress redundant information while keeping the parameter count balanced. Extensive experiments on the COVID-CT dataset demonstrate that MFAT achieves PSNR values of 35.61 dB and 33.34 dB and SSIM values of 0.9139 and 0.8706 at scale factors of ×3 and ×4, respectively, outperforming state-of-the-art methods. These results show that our method excels at reconstructing high-resolution lung CT images and recovering sharper image details.
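The sketch below is not the authors' implementation; the paper's architectural details are not given in the abstract. It is a minimal PyTorch illustration, under assumed design choices, of the two ideas the abstract names: splitting features into channel groups that are processed with different receptive fields before being re-aggregated (the MSAS idea), and a parameter-free, edge-operator-based gate that re-weights the fused features. All names (`MultiScaleSplitBlock`, `sobel_edge_gate`), the choice of dilated 3×3 convolutions, and the Sobel operator are illustrative assumptions.

```python
# Hedged sketch (not the paper's code): multi-scale channel splitting with
# per-scale receptive fields, plus a parameter-free Sobel-based gate.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_edge_gate(x: torch.Tensor) -> torch.Tensor:
    """Parameter-free gate: Sobel edge magnitude of the channel-averaged map,
    squashed to (0, 1) and broadcast back over the channel dimension."""
    gray = x.mean(dim=1, keepdim=True)                       # (B, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    edge = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
    return torch.sigmoid(edge)                                # no learned parameters


class MultiScaleSplitBlock(nn.Module):
    """Split channels into groups, give each group a different receptive field
    (3x3 convolutions with increasing dilation here), then fuse with a 1x1 conv."""
    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        assert channels % len(dilations) == 0
        self.split = channels // len(dilations)
        self.branches = nn.ModuleList(
            nn.Conv2d(self.split, self.split, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.split(x, self.split, dim=1)            # one chunk per scale
        feats = [branch(c) for branch, c in zip(self.branches, chunks)]
        out = self.fuse(torch.cat(feats, dim=1))              # aggregate the scales
        return x + out * sobel_edge_gate(out)                 # residual + edge re-weighting


if __name__ == "__main__":
    block = MultiScaleSplitBlock(channels=48)
    lr_feat = torch.randn(1, 48, 64, 64)                      # stand-in for CT feature maps
    print(block(lr_feat).shape)                               # torch.Size([1, 48, 64, 64])
```

The design choice being illustrated is that each channel group sees a different effective receptive field at no extra parameter cost relative to a single wide convolution, and the edge-based gate adds no learnable parameters at all, in line with the abstract's emphasis on efficiency.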
Journal description:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.