{"title":"Predicting pathological subtypes of pure ground-glass nodules using Swin Transformer deep learning model.","authors":"Yanhua Wen, Menna Allah Mahmoud, Wensheng Wu, Huicong Chen, Yingying Zhang, Xiaohuan Pan, Yubao Guan","doi":"10.1186/s13244-025-02113-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To explore the diagnostic value of a multi-classification model based on deep learning in distinguishing the pathological subtypes of lung adenocarcinoma or glandular prodromal lesions with pure ground-glass nodules (pGGN) on CT.</p><p><strong>Materials and methods: </strong>A total of 590 cases of pGGN confirmed by pathology as lung adenocarcinoma or glandular prodromal lesions were collected retrospectively, of which 462 cases of pGGN were used as training and testing set, and 128 cases of pGGN as external verification set. The research is based on the Swin Transformer network and uses a five-fold cross-validation method to train the model. The diagnostic efficacy of deep learning model and radiologist on the external verification set was compared. The classification efficiency of the model is evaluated by confusion matrix, accuracy, precision and F1-score.</p><p><strong>Results: </strong>The accuracy of the training and testing sets of the deep learning model is 95.21% and 91.41% respectively, and the integration accuracy is 94.65%. The accuracy, precision and recall rate of the optimal model are 87.01%, 87.57% and 87.01% respectively, and the F1-score is 87.09%. In the external verification set, the accuracy of the model is 91.41%, and the F1-score is 91.42%. 
The classification efficiency of the deep learning model is better than that of radiologists.</p><p><strong>Conclusion: </strong>The multi-classification model based on deep learning has a good ability to predict the pathological subtypes of lung adenocarcinoma or glandular prodromal lesions with pGGN, and its classification efficiency is better than that of radiologists, which can improve the diagnostic accuracy of pulmonary pGGN.</p><p><strong>Critical relevance statement: </strong>Swin Transformer deep learning models can noninvasively predict the pathological subtypes of pGGN, which can be used as a preoperative auxiliary diagnostic tool to improve the diagnostic accuracy of pGGN, thereby optimizing the prognosis of patients.</p><p><strong>Key points: </strong>The Swin Transformer model can predict the pathological subtype of pure ground-glass nodules. Compared with the performance of radiologists, the deep learning model performs better. Swin Transformer model can be used as a tool for preoperative diagnosis.</p>","PeriodicalId":13639,"journal":{"name":"Insights into Imaging","volume":"16 1","pages":"223"},"PeriodicalIF":4.5000,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534675/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Insights into Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13244-025-02113-3","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Objectives: To explore the diagnostic value of a multi-classification model based on deep learning in distinguishing the pathological subtypes of lung adenocarcinoma or glandular prodromal lesions with pure ground-glass nodules (pGGN) on CT.
Materials and methods: A total of 590 pGGN cases pathologically confirmed as lung adenocarcinoma or glandular prodromal lesions were collected retrospectively; 462 cases were used as the training and testing set and 128 cases as the external verification set. The model was built on the Swin Transformer network and trained with five-fold cross-validation. The diagnostic efficacy of the deep learning model and of radiologists on the external verification set was compared. Classification performance of the model was evaluated by confusion matrix, accuracy, precision, and F1-score.
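The abstract does not include the authors' code. As a hedged sketch only, the five-fold cross-validation split described above can be set up with scikit-learn's StratifiedKFold; the case counts match the abstract, but the labels here are randomly generated stand-ins, and the Swin Transformer training step itself (which would use a library such as timm) is only indicated in comments:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical stand-in data: 462 nodules, each assigned one of several
# pathological-subtype labels (the actual images and labels are not public here).
rng = np.random.default_rng(0)
n_cases = 462
labels = rng.integers(0, 4, size=n_cases)  # e.g. 4 subtype classes, assumed
case_ids = np.arange(n_cases)

# Five folds, stratified so each fold preserves the subtype proportions.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
folds = []
for train_idx, val_idx in skf.split(case_ids, labels):
    folds.append((train_idx, val_idx))
    # Here one would train a Swin Transformer classifier on train_idx and
    # validate on val_idx; the resulting model(s) are then applied to the
    # separate 128-case external verification set.

# Sanity check: every case appears in exactly one validation fold.
all_val = np.concatenate([v for _, v in folds])
assert sorted(all_val.tolist()) == case_ids.tolist()
```

Stratification is the usual choice here because pathological subtypes of pGGN are typically imbalanced, and an unstratified split could leave a rare subtype absent from some folds.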
Results: The accuracy of the deep learning model on the training and testing sets was 95.21% and 91.41%, respectively, and the integrated accuracy across folds was 94.65%. The optimal model achieved accuracy, precision, and recall of 87.01%, 87.57%, and 87.01%, respectively, with an F1-score of 87.09%. On the external verification set, the model's accuracy was 91.41% and its F1-score was 91.42%. The classification performance of the deep learning model was better than that of the radiologists.
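The metrics reported above (confusion matrix, accuracy, precision, recall, F1-score) can be reproduced for any multi-class prediction task with scikit-learn. This is a generic sketch on toy labels, not the authors' data; weighted averaging is assumed, since it is a common way to report a single precision/recall/F1 value across imbalanced subtype classes:

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

# Toy ground-truth subtypes and model predictions (illustrative only).
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 3, 3, 1, 2])

cm = confusion_matrix(y_true, y_pred)       # rows = true class, cols = predicted
acc = accuracy_score(y_true, y_pred)        # fraction of exact matches
prec = precision_score(y_true, y_pred, average="weighted")
rec = recall_score(y_true, y_pred, average="weighted")
f1 = f1_score(y_true, y_pred, average="weighted")

print(cm)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```

On this toy example, 8 of 10 predictions match, so the accuracy is 0.80; the diagonal of the confusion matrix counts the per-class correct predictions.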
Conclusion: The deep learning-based multi-classification model predicts the pathological subtypes of lung adenocarcinoma and glandular prodromal lesions presenting as pGGN well, and its classification performance is better than that of radiologists; it can therefore improve the diagnostic accuracy of pulmonary pGGN.
Critical relevance statement: Swin Transformer deep learning models can noninvasively predict the pathological subtypes of pGGN and can serve as a preoperative auxiliary diagnostic tool to improve the diagnostic accuracy of pGGN, thereby helping improve patient prognosis.
Key points: The Swin Transformer model can predict the pathological subtype of pure ground-glass nodules. Compared with radiologists, the deep learning model performs better. The Swin Transformer model can be used as a tool for preoperative diagnosis.
Journal description:
Insights into Imaging (I³) is a peer-reviewed open access journal published under the brand SpringerOpen. All content published in the journal is freely available online to anyone, anywhere!
I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles and state-of-the-art reviews and opinions, along with recommendations and statements from the leading radiological societies in Europe.
Founded by the European Society of Radiology (ESR), I³ creates a platform for educational material, guidelines and recommendations, and a forum for topics of controversy.
A balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes I³ an indispensable source for current information in this field.
I³ is owned by the ESR; however, authors retain copyright to their articles under the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed, and reused for free, as long as the author of the original work is cited properly.
The open access fees (article-processing charges) for this journal are kindly sponsored by ESR for all Members.
The journal went open access in 2012, which means that all articles published since then are freely available online.