A Dual-Modality Ultrasound Video Recognition Model for Distinguishing Subpleural Pulmonary Nodules
Yin Wang MD, PhD; Mengjun Shen MD; Ke Bi MD; Wei Yang MD, PhD; Xiaofei Ye MD, PhD; Qing Tang MD, PhD; Yi Zhang MD; Yang Cong MD; Huiming Zhu MD; Hongwei Chen MD; Chunhong Tang MD; Martin R. Prince MD, PhD
Mayo Clinic Proceedings: Innovations, Quality & Outcomes, Vol. 9, No. 5, Article 100659. Published September 9, 2025. DOI: 10.1016/j.mayocpiqo.2025.100659
Abstract
Objective
To develop a deep learning model based on dual-modality ultrasound (DMUS) video recognition for the differential diagnosis of benign and malignant subpleural pulmonary nodules (SPNs).
Patients and Methods
Data from 193 participants with SPNs (median age, 58 years [IQR, 34-66 years]; 123 men), prospectively collected from January 7 to December 21, 2020, were divided into training (n=154) and validation (n=39) sets in an 8:2 ratio. Additionally, independent internal (n=88) and external (n=91) test sets were prospectively collected from January 10 to June 25, 2021. The nature of the SPNs was determined through biopsy (n=306) or clinical follow-up (n=66). Our model integrated DMUS videos, time-intensity curves, and clinical information. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity and compared with state-of-the-art video classification models as well as with ultrasound and computed tomography (CT) diagnoses made by radiologists.
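The 8:2 partition described above can be sketched as follows. This is an illustrative reconstruction only; the shuffling and seed are assumptions, not details reported in the paper.

```python
import random

# 193 participants from the first collection period; each entry stands in
# for one participant record (IDs here are placeholders).
ids = list(range(193))

# Reproducible shuffle before splitting (the seed is an assumption).
random.Random(0).shuffle(ids)

# 8:2 ratio: round(193 * 0.8) = 154 training, remaining 39 validation,
# matching the set sizes reported in the abstract.
n_train = round(len(ids) * 0.8)
train_ids, val_ids = ids[:n_train], ids[n_train:]

print(len(train_ids), len(val_ids))  # 154 39
```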
Results
In the internal test set, our model distinguished malignant from benign SPNs with an AUC of 0.91 and an accuracy, sensitivity, and specificity of 91% (80 of 88), 90% (27 of 30), and 91% (53 of 58), respectively, outperforming state-of-the-art video classification models (all P<.05). In the external test set, the model achieved an accuracy, sensitivity, and specificity of 89% (81 of 91), 84% (27 of 32), and 92% (54 of 59), respectively, compared with 81% (74 of 91), 63% (20 of 32), and 92% (54 of 59) for radiologist interpretations of ultrasound and 76% (69 of 91), 91% (29 of 32), and 68% (40 of 59) for CT.
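The internal-test-set percentages follow directly from the raw counts reported above (30 malignant and 58 benign nodules). A minimal sanity check of that arithmetic:

```python
# Confusion-matrix counts reconstructed from the abstract's fractions:
# sensitivity 27/30 and specificity 53/58 on 88 internal test nodules.
tp, fn = 27, 3   # malignant nodules classified correctly / incorrectly
tn, fp = 53, 5   # benign nodules classified correctly / incorrectly

accuracy = (tp + tn) / (tp + tn + fp + fn)  # (27 + 53) / 88
sensitivity = tp / (tp + fn)                # 27 / 30
specificity = tn / (tn + fp)                # 53 / 58

print(f"accuracy={accuracy:.0%} "
      f"sensitivity={sensitivity:.0%} "
      f"specificity={specificity:.0%}")
# → accuracy=91% sensitivity=90% specificity=91%
```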
Conclusion
This deep learning model based on DMUS video recognition enhances the performance of ultrasound in differentiating benign from malignant SPNs.
Trial Registration
clinicaltrials.gov Identifier: ChiCTR1800019828