Snapshot artificial intelligence—determination of ejection fraction from a single frame still image: a multi-institutional, retrospective model development and validation study

IF 23.8 · CAS Region 1 (Medicine) · Q1 MEDICAL INFORMATICS
Jeffrey G Malins PhD , D M Anisuzzaman PhD , John I Jackson PhD , Eunjung Lee PhD , Jwan A Naser MBBS , Behrouz Rostami PhD , Jared G Bird MD , Dan Spiegelstein MD , Talia Amar MSc , Prof Jae K Oh MD , Prof Patricia A Pellikka MD , Jeremy J Thaden MD , Prof Francisco Lopez-Jimenez MD MSc , Prof Sorin V Pislaru MD PhD , Prof Paul A Friedman MD , Prof Garvan C Kane MD PhD , Zachi I Attia PhD
Lancet Digital Health, Volume 7, Issue 4, Pages e255–e263. Published 2025-03-25.
DOI: 10.1016/j.landig.2025.02.003
Article link: https://www.sciencedirect.com/science/article/pii/S2589750025000275
Citations: 0

Abstract

Background

Artificial intelligence (AI) is poised to transform point-of-care practice by providing rapid snapshots of cardiac functioning. Although previous AI models have been developed to estimate left ventricular ejection fraction (LVEF), they have typically used video clips as input, which can be computationally intensive. In the current study, we aimed to develop an LVEF estimation model that takes in static frames as input.

Methods

Using retrospective transthoracic echocardiography (TTE) data from Mayo Clinic Rochester and Mayo Clinic Health System sites (training: n=19 627; internal validation: n=862), we developed a two-dimensional convolutional neural network model that provides an LVEF estimate associated with an input frame from an echocardiogram video. We then evaluated model performance for Mayo Clinic TTE data (Rochester, n=1890; Arizona, n=1695; Florida, n=1862), the EchoNet-Dynamic TTE dataset (n=10 015), a prospective cohort of patients from whom TTE and handheld cardiac ultrasound (HCU) were simultaneously collected (n=625), and a prospective cohort of patients from whom HCU clips were collected by expert sonographers and novice users (n=100, distributed across three external sites).
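The evaluation protocol described above — per-frame LVEF estimates averaged across multiple video clips per patient — can be sketched as follows. The trained 2D CNN itself is not reproduced here; `estimate_lvef_from_frame` is a hypothetical stand-in so the aggregation logic is runnable, and all names are illustrative rather than from the paper.

```python
import numpy as np

def estimate_lvef_from_frame(frame: np.ndarray) -> float:
    """Placeholder for the paper's 2D CNN: maps one grayscale echo frame
    (H, W), pixel values in [0, 1], to a continuous LVEF estimate (%).
    Here faked as a rescaled mean intensity purely for illustration."""
    return 20.0 + 50.0 * float(frame.mean())

def patient_lvef(videos: list, frames_per_video: int = 1) -> float:
    """Average single-frame estimates across multiple clips from one patient,
    taking `frames_per_video` evenly spaced frames from each clip
    (frames_per_video=1 mirrors the one-frame-per-video condition)."""
    estimates = []
    for clip in videos:  # clip shape: (n_frames, H, W)
        idx = np.linspace(0, len(clip) - 1, frames_per_video).astype(int)
        estimates.extend(estimate_lvef_from_frame(clip[t]) for t in idx)
    return float(np.mean(estimates))
```

With the real model substituted in, averaging even one frame per clip across several clips smooths out frame-level noise, which is consistent with the Findings below.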

Findings

We observed consistently strong model performance when estimates from single frames were averaged across multiple video clips, even when only one frame was taken per video (for classifying LVEF ≤40% vs LVEF>40%, area under the receiver operating characteristic curve [AUC]>0·90 for all datasets except for HCU clips collected by novice users, for which AUC>0·85). We also observed that LVEF estimates differed slightly depending on the phase of the cardiac cycle when images were captured.
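The reported discrimination metric — AUC for classifying LVEF ≤40% vs >40% from the continuous, frame-averaged estimates — can be computed with the Mann–Whitney formulation of the AUC. A minimal sketch, assuming lower estimated LVEF scores the positive (reduced-EF) class; the function name is illustrative, not from the paper:

```python
import numpy as np

def auc_lvef_classifier(lvef_estimates, lvef_true, threshold=40.0):
    """AUC for detecting reduced LVEF (true LVEF <= threshold) using the
    model's continuous LVEF estimate as the score. Equivalent to the
    Mann-Whitney U statistic: P(score_pos > score_neg), ties counted half."""
    scores = -np.asarray(lvef_estimates, dtype=float)  # lower LVEF => higher score
    labels = np.asarray(lvef_true, dtype=float) <= threshold
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise-comparison form avoids choosing an operating point and matches how thresholded AUCs such as the reported >0·90 are derived from continuous model outputs (it assumes both classes are present in the evaluation set).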

Interpretation

When aiming to rapidly deploy such models, single frames from multiple videos might be sufficient for LVEF classification. Furthermore, the observed sensitivity to the cardiac cycle offers some insights on model performance from an explainability perspective.

Funding

Internal institutional funds provided by the Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA.
Source journal metrics: CiteScore 41.20 · Self-citation rate 1.60% · Annual publications 232 · Review time 13 weeks
About the journal: The Lancet Digital Health publishes important, innovative, and practice-changing research on any topic connected with digital technology in clinical medicine, public health, and global health. The journal's open access content crosses subject boundaries, building bridges between health professionals and researchers. By bringing together the most important advances in this multidisciplinary field, The Lancet Digital Health is a prominent publishing venue in digital health. It publishes a range of content types, including Articles, Reviews, Comments, and Correspondence, promoting digital technologies in health practice worldwide.