Deep learning-based lung cancer classification of CT images.
Mohammad Khalid Faizi, Yan Qiang, Yangyang Wei, Ying Qiao, Juanjuan Zhao, Rukhma Aftab, Zia Urrehman
BMC Cancer 25(1):1056, published 2025-07-01. DOI: 10.1186/s12885-025-14320-8. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12210548/pdf/
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
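To make the dual-branch idea concrete, the PyTorch sketch below shows one plausible way to combine a CNN branch for local features with a Swin-Tiny branch for global features and fuse them for benign/malignant classification. The class name DualBranchNoduleClassifier, the ResNet-18 local branch, and the concatenate-then-MLP fusion are illustrative assumptions, not the authors' released DCSwinB code (which additionally uses a Conv-MLP module to link adjacent Swin windows).

```python
# Hypothetical dual-branch nodule classifier: CNN (local) + Swin-Tiny (global).
# This is a minimal sketch of the architecture style described in the abstract,
# not the published DCSwinB implementation.
import torch
import torch.nn as nn
import timm  # supplies the Swin-Tiny backbone
from torchvision.models import resnet18


class DualBranchNoduleClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Local branch: small CNN backbone, 512-d pooled features.
        cnn = resnet18(weights=None)
        self.cnn_branch = nn.Sequential(*list(cnn.children())[:-1])  # drop the FC head
        # Global branch: Swin-Tiny as a feature extractor (768-d pooled output).
        self.swin_branch = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=False, num_classes=0
        )
        # Fusion head over concatenated local + global features.
        self.head = nn.Sequential(
            nn.Linear(512 + 768, 256),
            nn.GELU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.cnn_branch(x).flatten(1)   # (B, 512)
        global_feat = self.swin_branch(x)             # (B, 768)
        return self.head(torch.cat([local_feat, global_feat], dim=1))


# Usage: a batch of 224x224 CT slices, replicated to 3 channels for the backbones.
model = DualBranchNoduleClassifier()
logits = model(torch.randn(4, 3, 224, 224))  # -> shape (4, 2)
```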
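The reported figures (accuracy, recall, specificity, AUC) come from ten-fold cross-validation. The scikit-learn sketch below shows how such per-fold metrics can be computed; the synthetic feature array and the placeholder logistic-regression classifier are stand-ins for illustration only, since the paper evaluates its DCSwinB network rather than a linear model.

```python
# Minimal sketch: ten-fold cross-validation with accuracy, recall (sensitivity),
# specificity, and AUC per fold. Data and classifier are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 32))      # placeholder nodule feature vectors
labels = rng.integers(0, 2, size=500)      # 0 = benign, 1 = malignant

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(features, labels), start=1):
    clf = LogisticRegression(max_iter=1000).fit(features[train_idx], labels[train_idx])
    prob = clf.predict_proba(features[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels[test_idx], pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                # sensitivity
    specificity = tn / (tn + fp)
    auc = roc_auc_score(labels[test_idx], prob)
    print(f"fold {fold}: acc={accuracy:.3f} rec={recall:.3f} "
          f"spec={specificity:.3f} auc={auc:.3f}")
```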
About the journal:
BMC Cancer is an open access, peer-reviewed journal that considers articles on all aspects of cancer research, including the pathophysiology, prevention, diagnosis and treatment of cancers. The journal welcomes submissions concerning molecular and cellular biology, genetics, epidemiology, and clinical trials.