A Dual-Modality Ultrasound Video Recognition Model for Distinguishing Subpleural Pulmonary Nodules

Yin Wang MD, PhD , Mengjun Shen MD , Ke Bi MD , Wei Yang MD, PhD , Xiaofei Ye MD, PhD , Qing Tang MD, PhD , Yi Zhang MD , Yang Cong MD , Huiming Zhu MD , Hongwei Chen MD , Chunhong Tang MD , Martin R. Prince MD, PhD
Mayo Clinic Proceedings: Innovations, Quality & Outcomes, Volume 9, Issue 5, Article 100659. Published September 9, 2025. DOI: 10.1016/j.mayocpiqo.2025.100659

Abstract

Objective

To develop a deep learning model based on dual-modality ultrasound (DMUS) video recognition for the differential diagnosis of benign and malignant subpleural pulmonary nodules (SPNs).

Patients and Methods

Data from 193 participants with SPNs (median age, 58 years [IQR, 34-66 years]; 123 men), prospectively collected from January 7 to December 21, 2020, were divided into training (n=154) and validation (n=39) sets in an 8:2 ratio. Additionally, independent internal (n=88) and external (n=91) test sets were prospectively collected from January 10 to June 25, 2021. The nature of the SPNs was determined through biopsy (n=306) and clinical follow-up (n=66). Our model integrated DMUS videos, time-intensity curves, and clinical information. The model's performance was evaluated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity and compared with state-of-the-art video classification models, as well as ultrasound and computed tomography diagnoses made by radiologists.
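The 8:2 partition described above can be sketched in a few lines; this is an illustrative reconstruction, not the authors' code, and the participant IDs are stand-ins:

```python
import random

# Illustrative sketch (not the authors' pipeline): an 8:2 random
# train/validation split of 193 participants, matching the reported
# 154/39 partition in the abstract.
random.seed(0)
participants = list(range(193))  # hypothetical participant IDs
random.shuffle(participants)

n_train = round(len(participants) * 0.8)  # 193 * 0.8 = 154.4 -> 154
train, val = participants[:n_train], participants[n_train:]
print(len(train), len(val))  # 154 39
```

Rounding 80% of 193 yields exactly the 154/39 split reported; the actual study may have stratified by nodule type, which this sketch does not attempt.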

Results

In the internal test set, our model accurately distinguished malignant from benign SPNs with an AUC, accuracy, sensitivity, and specificity of 0.91, 91% (80 of 88), 90% (27 of 30), and 91% (53 of 58), respectively, outperforming state-of-the-art video classification models (all P<.05). In the external test set, the model achieved accuracy, sensitivity, and specificity of 89% (81 of 91), 84% (27 of 32), and 92% (54 of 59), respectively, exceeding radiologist interpretations of ultrasound (81% [74 of 91], 63% [20 of 32], and 92% [54 of 59]) and computed tomography (76% [69 of 91], 91% [29 of 32], and 68% [40 of 59]).
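The internal-test-set percentages follow directly from the counts in parentheses; as a quick check, the confusion-matrix arithmetic (with the counts from the abstract, treating malignant as the positive class) can be worked through:

```python
# Verify the reported internal-test-set metrics from the abstract's counts:
# 27 of 30 malignant SPNs and 53 of 58 benign SPNs classified correctly.
tp, fn = 27, 3  # malignant nodules: correct / missed
tn, fp = 53, 5  # benign nodules: correct / false alarms

sensitivity = tp / (tp + fn)                 # 27/30 = 0.90
specificity = tn / (tn + fp)                 # 53/58 ~= 0.91
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 80/88 ~= 0.91

print(f"sensitivity={sensitivity:.0%} "
      f"specificity={specificity:.0%} accuracy={accuracy:.0%}")
```

The same arithmetic applied to the external-test-set counts (27/32 and 54/59) reproduces the 84% and 92% figures.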

Conclusion

This deep learning model based on DMUS video recognition enhances the performance of ultrasound in differentiating benign from malignant SPNs.

Trial Registration

Chinese Clinical Trial Registry (chictr.org.cn) Identifier: ChiCTR1800019828