Predicting pathological subtypes of pure ground-glass nodules using Swin Transformer deep learning model.

IF 4.5 | CAS Zone 2 (Medicine) | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Yanhua Wen, Menna Allah Mahmoud, Wensheng Wu, Huicong Chen, Yingying Zhang, Xiaohuan Pan, Yubao Guan
{"title":"Predicting pathological subtypes of pure ground-glass nodules using Swin Transformer deep learning model.","authors":"Yanhua Wen, Menna Allah Mahmoud, Wensheng Wu, Huicong Chen, Yingying Zhang, Xiaohuan Pan, Yubao Guan","doi":"10.1186/s13244-025-02113-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To explore the diagnostic value of a multi-classification model based on deep learning in distinguishing the pathological subtypes of lung adenocarcinoma or glandular prodromal lesions with pure ground-glass nodules (pGGN) on CT.</p><p><strong>Materials and methods: </strong>A total of 590 cases of pGGN confirmed by pathology as lung adenocarcinoma or glandular prodromal lesions were collected retrospectively, of which 462 cases of pGGN were used as training and testing set, and 128 cases of pGGN as external verification set. The research is based on the Swin Transformer network and uses a five-fold cross-validation method to train the model. The diagnostic efficacy of deep learning model and radiologist on the external verification set was compared. The classification efficiency of the model is evaluated by confusion matrix, accuracy, precision and F1-score.</p><p><strong>Results: </strong>The accuracy of the training and testing sets of the deep learning model is 95.21% and 91.41% respectively, and the integration accuracy is 94.65%. The accuracy, precision and recall rate of the optimal model are 87.01%, 87.57% and 87.01% respectively, and the F1-score is 87.09%. In the external verification set, the accuracy of the model is 91.41%, and the F1-score is 91.42%. The classification efficiency of the deep learning model is better than that of radiologists.</p><p><strong>Conclusion: </strong>The multi-classification model based on deep learning has a good ability to predict the pathological subtypes of lung adenocarcinoma or glandular prodromal lesions with pGGN, and its classification efficiency is better than that of radiologists, which can improve the diagnostic accuracy of pulmonary pGGN.</p><p><strong>Critical relevance statement: </strong>Swin Transformer deep learning models can noninvasively predict the pathological subtypes of pGGN, which can be used as a preoperative auxiliary diagnostic tool to improve the diagnostic accuracy of pGGN, thereby optimizing the prognosis of patients.</p><p><strong>Key points: </strong>The Swin Transformer model can predict the pathological subtype of pure ground-glass nodules. Compared with the performance of radiologists, the deep learning model performs better. Swin Transformer model can be used as a tool for preoperative diagnosis.</p>","PeriodicalId":13639,"journal":{"name":"Insights into Imaging","volume":"16 1","pages":"223"},"PeriodicalIF":4.5000,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12534675/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Insights into Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13244-025-02113-3","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}

Abstract

Objectives: To explore the diagnostic value of a deep learning-based multi-class classification model for distinguishing the pathological subtypes of lung adenocarcinoma and precursor glandular lesions presenting as pure ground-glass nodules (pGGN) on CT.

Materials and methods: A total of 590 pGGN pathologically confirmed as lung adenocarcinoma or precursor glandular lesions were collected retrospectively; 462 pGGN were used as the training and testing set and 128 pGGN as the external validation set. The model was built on the Swin Transformer network and trained with five-fold cross-validation. The diagnostic performance of the deep learning model and of radiologists was compared on the external validation set. Classification performance was evaluated with the confusion matrix, accuracy, precision, and F1-score.
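The abstract gives no implementation details, but a minimal sketch of the described setup (a Swin Transformer classifier trained with five-fold cross-validation) might look as follows. The model variant, input size, number of subtype classes, channel count, and training hyperparameters are illustrative assumptions, not values reported by the authors.

```python
# Hypothetical sketch of the described pipeline: Swin Transformer classifier
# trained with five-fold cross-validation. All settings below are assumptions.
import numpy as np
import timm
import torch
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 4   # assumed number of pathological subtypes (not stated in the abstract)
EPOCHS = 5        # illustrative training length

def make_model() -> torch.nn.Module:
    # Swin-Tiny with ImageNet-pretrained weights, adapted to single-channel CT
    # patches via in_chans=1; the exact variant used by the authors is unknown.
    return timm.create_model(
        "swin_tiny_patch4_window7_224",
        pretrained=True,
        in_chans=1,
        num_classes=NUM_CLASSES,
    )

def run_cv(images: np.ndarray, labels: np.ndarray) -> None:
    """images: (N, 1, 224, 224) float32 nodule patches; labels: (N,) int64 subtype ids."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for fold, (tr_idx, va_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
        model = make_model()
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss()
        train_ds = TensorDataset(torch.from_numpy(images[tr_idx]),
                                 torch.from_numpy(labels[tr_idx]))
        loader = DataLoader(train_ds, batch_size=16, shuffle=True)
        model.train()
        for _ in range(EPOCHS):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
        # Report per-fold validation accuracy
        model.eval()
        with torch.no_grad():
            preds = model(torch.from_numpy(images[va_idx])).argmax(dim=1)
        acc = (preds == torch.from_numpy(labels[va_idx])).float().mean().item()
        print(f"fold {fold}: validation accuracy = {acc:.4f}")
```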

Results: The accuracy of the deep learning model was 95.21% on the training set and 91.41% on the testing set, with an integrated accuracy of 94.65% across folds. The optimal model achieved an accuracy of 87.01%, a precision of 87.57%, a recall of 87.01%, and an F1-score of 87.09%. On the external validation set, the model's accuracy was 91.41% and its F1-score was 91.42%. The classification performance of the deep learning model was better than that of the radiologists.
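As a point of reference, the reported metrics (confusion matrix, accuracy, precision, recall, F1-score) can be computed from predicted and true subtype labels as in the short sketch below. This is not the authors' code, and the weighted averaging scheme is an assumption, since the abstract does not state how the multi-class metrics were averaged.

```python
# Minimal sketch: computing the reported multi-class evaluation metrics with
# scikit-learn; 'weighted' averaging is an assumption, not taken from the paper.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

def summarize(y_true, y_pred):
    print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
    print(f"accuracy : {accuracy_score(y_true, y_pred):.4f}")
    print(f"precision: {precision_score(y_true, y_pred, average='weighted'):.4f}")
    print(f"recall   : {recall_score(y_true, y_pred, average='weighted'):.4f}")
    print(f"F1-score : {f1_score(y_true, y_pred, average='weighted'):.4f}")

# Example with illustrative labels for a four-class problem:
summarize([0, 1, 2, 3, 2, 1], [0, 1, 2, 3, 1, 1])
```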

Conclusion: The deep learning-based multi-class classification model predicts the pathological subtypes of lung adenocarcinoma and precursor glandular lesions presenting as pGGN well, and its classification performance is better than that of radiologists, so it can improve the diagnostic accuracy for pulmonary pGGN.

Critical relevance statement: Swin Transformer deep learning models can noninvasively predict the pathological subtypes of pGGN and can serve as a preoperative auxiliary diagnostic tool to improve the diagnostic accuracy of pGGN, thereby improving patient prognosis.

Key points: The Swin Transformer model can predict the pathological subtype of pure ground-glass nodules. The deep learning model performs better than radiologists. The Swin Transformer model can be used as a tool for preoperative diagnosis.

Source journal
Insights into Imaging (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 7.30
Self-citation rate: 4.30%
Publication volume: 182 articles
Review time: 13 weeks
Journal description: Insights into Imaging (I³) is a peer-reviewed open access journal published under the brand SpringerOpen. All content published in the journal is freely available online to anyone, anywhere. I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles and state-of-the-art reviews and opinions, along with recommendations and statements from the leading radiological societies in Europe.

Founded by the European Society of Radiology (ESR), I³ creates a platform for educational material, guidelines and recommendations, and a forum for topics of controversy. A balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes I³ an indispensable source for current information in this field.

I³ is owned by the ESR; however, authors retain copyright to their article according to the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed and reused for free, as long as the author of the original work is cited properly. The open access fees (article-processing charges) for this journal are kindly sponsored by the ESR for all Members. The journal went open access in 2012, which means that all articles published since then are freely available online.