Meihua Shao, Jian Wang, Lin Zhu, Jianfei Tu, Guohua Cheng, Linyang He, Hengfeng Shi, Cui Zhang, Hong Yu
{"title":"深度学习在非增强胸部CT磨玻璃结节三级分类中的应用:CNN架构的多中心对比研究","authors":"Meihua Shao , Jian Wang , Lin Zhu , Jianfei Tu , Guohua Cheng , Linyang He , Hengfeng Shi , Cui Zhang , Hong Yu","doi":"10.1016/j.ejro.2025.100690","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>To develop, validate, and compare four three-dimensional (3D) convolutional neural network (CNN) models for differentiating ground-glass nodules (GGNs) on non-contrast chest computed tomography (CT) scans, specifically classifying them as adenomatous hyperplasia (AAH)/adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IA).</div></div><div><h3>Materials and methods</h3><div>This multi-center study retrospectively enrolled 4284 consecutive patients with surgically resected and pathologically confirmed AAH/AIS, MIA, or IA from four hospitals between January 2015 and December 2023. GGNs were randomly partitioned into a training set (n = 3083, 72 %) and a validation set (n = 1277, 28 %). Four 3D deep learning models (Res2Net 3D, DenseNet3D, ResNet50 3D, Vision Transformer 3D) were implemented for GGN segmentation and three-class classification. Additionally, variants of the Res2Net 3D model were developed by incorporating clinical and CT features: Res2Net 3D_w2 (sex, age), Res2Net 3D_w6 (adding lesion size, location, and smoking history), and Res2Net 3D_w10 (sex, age, location, the mean, maximum, and standard deviation of CT attenuation, nodule volume, volume ratio, volume ratio within the left/right lung, and the maximum CT value of the entire lung). Model performance was evaluated using accuracy, recall, precision, F1-score, and area under the receiver operating characteristic curve (AUC).</div></div><div><h3>Results</h3><div>Res2Net 3D outperformed others, achieving AUCs of 0.91 (AAH/AIS), 0.88 (MIA), and 0.92 (IA). Its F1-scores were 0.416, 0.500, and 0.929, respectively. All Res2Net variants achieved accuracies between 0.83–0.84.</div></div><div><h3>Conclusion</h3><div>The Res2Net 3D model accurately differentiates GGN subtypes using non-contrast CT, showing high performance, especially for invasive adenocarcinoma.</div></div>","PeriodicalId":38076,"journal":{"name":"European Journal of Radiology Open","volume":"15 ","pages":"Article 100690"},"PeriodicalIF":2.9000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep learning for Three‐Class Classification of ground-glass nodules on non-enhanced chest CT: A multicenter comparative study of CNN architectures\",\"authors\":\"Meihua Shao , Jian Wang , Lin Zhu , Jianfei Tu , Guohua Cheng , Linyang He , Hengfeng Shi , Cui Zhang , Hong Yu\",\"doi\":\"10.1016/j.ejro.2025.100690\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><div>To develop, validate, and compare four three-dimensional (3D) convolutional neural network (CNN) models for differentiating ground-glass nodules (GGNs) on non-contrast chest computed tomography (CT) scans, specifically classifying them as adenomatous hyperplasia (AAH)/adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IA).</div></div><div><h3>Materials and methods</h3><div>This multi-center study retrospectively enrolled 4284 consecutive patients with surgically resected and pathologically confirmed AAH/AIS, MIA, or IA from four hospitals between January 2015 and December 2023. 
GGNs were randomly partitioned into a training set (n = 3083, 72 %) and a validation set (n = 1277, 28 %). Four 3D deep learning models (Res2Net 3D, DenseNet3D, ResNet50 3D, Vision Transformer 3D) were implemented for GGN segmentation and three-class classification. Additionally, variants of the Res2Net 3D model were developed by incorporating clinical and CT features: Res2Net 3D_w2 (sex, age), Res2Net 3D_w6 (adding lesion size, location, and smoking history), and Res2Net 3D_w10 (sex, age, location, the mean, maximum, and standard deviation of CT attenuation, nodule volume, volume ratio, volume ratio within the left/right lung, and the maximum CT value of the entire lung). Model performance was evaluated using accuracy, recall, precision, F1-score, and area under the receiver operating characteristic curve (AUC).</div></div><div><h3>Results</h3><div>Res2Net 3D outperformed others, achieving AUCs of 0.91 (AAH/AIS), 0.88 (MIA), and 0.92 (IA). Its F1-scores were 0.416, 0.500, and 0.929, respectively. All Res2Net variants achieved accuracies between 0.83–0.84.</div></div><div><h3>Conclusion</h3><div>The Res2Net 3D model accurately differentiates GGN subtypes using non-contrast CT, showing high performance, especially for invasive adenocarcinoma.</div></div>\",\"PeriodicalId\":38076,\"journal\":{\"name\":\"European Journal of Radiology Open\",\"volume\":\"15 \",\"pages\":\"Article 100690\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Radiology Open\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352047725000577\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Radiology Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352047725000577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Deep learning for three-class classification of ground-glass nodules on non-enhanced chest CT: A multicenter comparative study of CNN architectures
Objective
To develop, validate, and compare four three-dimensional (3D) convolutional neural network (CNN) models for differentiating ground-glass nodules (GGNs) on non-contrast chest computed tomography (CT) scans, specifically classifying them as atypical adenomatous hyperplasia (AAH)/adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IA).
Materials and methods
This multicenter study retrospectively enrolled 4284 consecutive patients with surgically resected and pathologically confirmed AAH/AIS, MIA, or IA from four hospitals between January 2015 and December 2023. GGNs were randomly partitioned into a training set (n = 3083, 72%) and a validation set (n = 1277, 28%). Four 3D deep learning models (Res2Net 3D, DenseNet 3D, ResNet50 3D, Vision Transformer 3D) were implemented for GGN segmentation and three-class classification. Additionally, variants of the Res2Net 3D model were developed by incorporating clinical and CT features: Res2Net 3D_w2 (sex, age), Res2Net 3D_w6 (adding lesion size, location, and smoking history), and Res2Net 3D_w10 (sex, age, location, the mean, maximum, and standard deviation of CT attenuation, nodule volume, volume ratio, volume ratio within the left/right lung, and the maximum CT value of the entire lung). Model performance was evaluated using accuracy, recall, precision, F1-score, and area under the receiver operating characteristic curve (AUC).
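The abstract does not describe the exact fusion mechanism, but a common way to combine a 3D image embedding with tabular clinical/CT features (as in the Res2Net 3D_w2/_w6/_w10 variants) is to concatenate them before the classification head. The PyTorch sketch below is a minimal, hypothetical illustration of that pattern; the class name `GGNClassifier3D`, the tiny backbone, the layer sizes, and the ten-feature clinical vector are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GGNClassifier3D(nn.Module):
    """Hypothetical sketch: a small 3D CNN backbone whose pooled embedding is
    concatenated with tabular clinical/CT features (e.g., sex, age, nodule
    volume) before a three-class head (AAH/AIS, MIA, IA)."""

    def __init__(self, n_clinical: int = 10, n_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, volume: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        emb = self.backbone(volume).flatten(1)           # (B, 32) image embedding
        return self.head(torch.cat([emb, clinical], 1))  # (B, 3) class logits

# Toy forward pass: two 64x64x64 CT patches plus 10 clinical/CT features each.
model = GGNClassifier3D()
logits = model(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 3])
```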
Results
Res2Net 3D outperformed the other models, achieving AUCs of 0.91 (AAH/AIS), 0.88 (MIA), and 0.92 (IA), with corresponding F1-scores of 0.416, 0.500, and 0.929. All Res2Net variants achieved accuracies between 0.83 and 0.84.
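For readers unfamiliar with how per-class metrics are reported in a three-class setting, the snippet below shows one common way to compute one-vs-rest AUCs, per-class F1-scores, and overall accuracy with scikit-learn. It uses synthetic placeholder predictions and is not the study's evaluation code; the class ordering (0 = AAH/AIS, 1 = MIA, 2 = IA) is assumed for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score

# Synthetic placeholder predictions: 3-class probabilities and true labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)            # 0=AAH/AIS, 1=MIA, 2=IA
y_prob = rng.dirichlet(np.ones(3), size=200)     # predicted class probabilities
y_pred = y_prob.argmax(axis=1)                   # hard predictions

for cls, name in enumerate(["AAH/AIS", "MIA", "IA"]):
    auc = roc_auc_score((y_true == cls).astype(int), y_prob[:, cls])  # one-vs-rest AUC
    f1 = f1_score(y_true, y_pred, labels=[cls], average=None)[0]      # per-class F1
    print(f"{name}: AUC={auc:.2f}, F1={f1:.3f}")

print("accuracy:", accuracy_score(y_true, y_pred))
```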
Conclusion
The Res2Net 3D model accurately differentiates GGN subtypes on non-contrast CT, with particularly strong performance for invasive adenocarcinoma.