Samuel Karmiy, Zhe Huang, Divya Velury, Eileen Mai, Jing Li, Monica M Dehn, Dikran R Balian, Davinder Ramsingh, John Martin, Jacob Kantrowitz, Ayan R Patel, Michael C Hughes, Benjamin S Wessler
{"title":"手持式超声主动脉瓣狭窄的机器学习筛查。","authors":"Samuel Karmiy, Zhe Huang, Divya Velury, Eileen Mai, Jing Li, Monica M Dehn, Dikran R Balian, Davinder Ramsingh, John Martin, Jacob Kantrowitz, Ayan R Patel, Michael C Hughes, Benjamin S Wessler","doi":"10.1093/ehjimp/qyaf051","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>Neural network classifiers can detect aortic stenosis (AS) using limited cardiac ultrasound images. While networks perform very well using cart-based imaging, they have never been tested or fine-tuned for use with focused cardiac ultrasound (FoCUS) acquisitions obtained on handheld ultrasound devices.</p><p><strong>Methods and results: </strong>Prospective study performed at Tufts Medical Center. All patients ≥65 years of age referred for clinically indicated transthoracic echocardigraphy (TTE) were eligible for inclusion. Parasternal long axis and parasternal short axis imaging was acquired using a commercially available handheld ultrasound device. Our cart-based AS classifier (trained on ∼10 000 images) was tested on FoCUS imaging from 160 patients. The median age was 74 (inter-quartile range 69-80) years, 50% of patients were women. Thirty patients (18.8%) had some degree of AS. The area under the received operator curve (AUROC) of the cart-based model for detecting AS was 0.87 (95% CI 0.75-0.99) on the FoCUS test set. Last-layer fine-tuning on handheld data established a classifier with AUROC of 0.94 (0.91-0.97). AUROC during temporal external validation was 0.97 (95% CI 0.89-1.0). When performance of the fine-tuned AS classifier was modelled on potential screening environments (2 and 10% AS prevalence), the positive predictive value ranged from 0.72 (0.69-0.76) to 0.88 (0.81-0.97) and negative predictive value ranged from 0.94 (0.94-0.94) to 0.99 (0.99-0.99) respectively.</p><p><strong>Conclusion: </strong>Our cart-based machine-learning model for AS showed a drop in performance when tested on handheld ultrasound imaging collected by sonographers. Fine-tuning the AS classifier improved performance and demonstrates potential as a novel approach to detecting AS through automated interpretation of handheld imaging.</p>","PeriodicalId":94317,"journal":{"name":"European heart journal. Imaging methods and practice","volume":"3 1","pages":"qyaf051"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12089772/pdf/","citationCount":"0","resultStr":"{\"title\":\"Machine learning-enabled screening for aortic stenosis with handheld ultrasound.\",\"authors\":\"Samuel Karmiy, Zhe Huang, Divya Velury, Eileen Mai, Jing Li, Monica M Dehn, Dikran R Balian, Davinder Ramsingh, John Martin, Jacob Kantrowitz, Ayan R Patel, Michael C Hughes, Benjamin S Wessler\",\"doi\":\"10.1093/ehjimp/qyaf051\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Aims: </strong>Neural network classifiers can detect aortic stenosis (AS) using limited cardiac ultrasound images. While networks perform very well using cart-based imaging, they have never been tested or fine-tuned for use with focused cardiac ultrasound (FoCUS) acquisitions obtained on handheld ultrasound devices.</p><p><strong>Methods and results: </strong>Prospective study performed at Tufts Medical Center. All patients ≥65 years of age referred for clinically indicated transthoracic echocardigraphy (TTE) were eligible for inclusion. 
Parasternal long axis and parasternal short axis imaging was acquired using a commercially available handheld ultrasound device. Our cart-based AS classifier (trained on ∼10 000 images) was tested on FoCUS imaging from 160 patients. The median age was 74 (inter-quartile range 69-80) years, 50% of patients were women. Thirty patients (18.8%) had some degree of AS. The area under the received operator curve (AUROC) of the cart-based model for detecting AS was 0.87 (95% CI 0.75-0.99) on the FoCUS test set. Last-layer fine-tuning on handheld data established a classifier with AUROC of 0.94 (0.91-0.97). AUROC during temporal external validation was 0.97 (95% CI 0.89-1.0). When performance of the fine-tuned AS classifier was modelled on potential screening environments (2 and 10% AS prevalence), the positive predictive value ranged from 0.72 (0.69-0.76) to 0.88 (0.81-0.97) and negative predictive value ranged from 0.94 (0.94-0.94) to 0.99 (0.99-0.99) respectively.</p><p><strong>Conclusion: </strong>Our cart-based machine-learning model for AS showed a drop in performance when tested on handheld ultrasound imaging collected by sonographers. Fine-tuning the AS classifier improved performance and demonstrates potential as a novel approach to detecting AS through automated interpretation of handheld imaging.</p>\",\"PeriodicalId\":94317,\"journal\":{\"name\":\"European heart journal. Imaging methods and practice\",\"volume\":\"3 1\",\"pages\":\"qyaf051\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12089772/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European heart journal. Imaging methods and practice\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/ehjimp/qyaf051\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European heart journal. Imaging methods and practice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/ehjimp/qyaf051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Machine learning-enabled screening for aortic stenosis with handheld ultrasound.
Aims: Neural network classifiers can detect aortic stenosis (AS) using limited cardiac ultrasound images. While these networks perform very well on cart-based imaging, they have never been tested or fine-tuned for use with focused cardiac ultrasound (FoCUS) acquisitions obtained on handheld ultrasound devices.
Methods and results: Prospective study performed at Tufts Medical Center. All patients ≥65 years of age referred for clinically indicated transthoracic echocardiography (TTE) were eligible for inclusion. Parasternal long-axis and parasternal short-axis imaging was acquired using a commercially available handheld ultrasound device. Our cart-based AS classifier (trained on ∼10 000 images) was tested on FoCUS imaging from 160 patients. The median age was 74 years (interquartile range 69-80), and 50% of patients were women. Thirty patients (18.8%) had some degree of AS. The area under the receiver operating characteristic curve (AUROC) of the cart-based model for detecting AS was 0.87 (95% CI 0.75-0.99) on the FoCUS test set. Last-layer fine-tuning on handheld data established a classifier with an AUROC of 0.94 (0.91-0.97). The AUROC during temporal external validation was 0.97 (95% CI 0.89-1.0). When the performance of the fine-tuned AS classifier was modelled in potential screening environments (2% and 10% AS prevalence), the positive predictive value ranged from 0.72 (0.69-0.76) to 0.88 (0.81-0.97) and the negative predictive value from 0.94 (0.94-0.94) to 0.99 (0.99-0.99), respectively.
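For context, prevalence-adjusted predictive values of this kind follow directly from Bayes' theorem once a sensitivity and specificity are fixed at an operating point. The Python sketch below is a minimal illustration of that calculation; the sensitivity and specificity values are hypothetical placeholders (they are not reported in the abstract and will not reproduce the figures above), while the two prevalences match the screening environments modelled in the study, whose exact modelling and confidence-interval procedure is not described here.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Prevalence-adjusted PPV and NPV via Bayes' theorem."""
    true_pos = sensitivity * prevalence            # diseased patients correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy patients incorrectly flagged
    true_neg = specificity * (1 - prevalence)      # healthy patients correctly cleared
    false_neg = (1 - sensitivity) * prevalence     # diseased patients missed
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical operating point (sensitivity/specificity are NOT values from the study).
for prevalence in (0.02, 0.10):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.90, prevalence=prevalence)
    print(f"AS prevalence {prevalence:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

As the formulas make explicit, PPV rises and NPV falls as prevalence increases, which is why screening in a low-prevalence population is dominated by the negative predictive value.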
Conclusion: Our cart-based machine-learning model for AS showed a drop in performance when tested on handheld ultrasound imaging collected by sonographers. Fine-tuning the AS classifier improved performance, demonstrating the potential of automated interpretation of handheld imaging as a novel approach to detecting AS.
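The last-layer fine-tuning described above generally means freezing a pretrained backbone and retraining only the final classification layer on the new imaging domain. The sketch below illustrates that recipe in PyTorch under stated assumptions: the ResNet-18 backbone, the optimizer settings, and the synthetic handheld_loader are placeholders for illustration only and are not taken from the paper, whose actual architecture and training details are not given in the abstract.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Backbone standing in for the cart-based AS classifier (architecture not specified
# in the abstract; an ImageNet-pretrained ResNet-18 is used purely as a placeholder).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pretrained weights so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a fresh binary head (AS vs. no AS).
model.fc = nn.Linear(model.fc.in_features, 2)

# Hypothetical stand-in for labelled handheld (FoCUS) frames; replace with real data.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))
handheld_loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for batch_images, batch_labels in handheld_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
```

Retraining only the final layer keeps the number of trainable parameters small, which suits a fine-tuning set drawn from a relatively small cohort such as the 160 patients imaged here.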