Snapshot artificial intelligence—determination of ejection fraction from a single frame still image: a multi-institutional, retrospective model development and validation study

Jeffrey G Malins PhD, D M Anisuzzaman PhD, John I Jackson PhD, Eunjung Lee PhD, Jwan A Naser MBBS, Behrouz Rostami PhD, Jared G Bird MD, Dan Spiegelstein MD, Talia Amar MSc, Prof Jae K Oh MD, Prof Patricia A Pellikka MD, Jeremy J Thaden MD, Prof Francisco Lopez-Jimenez MD MSc, Prof Sorin V Pislaru MD PhD, Prof Paul A Friedman MD, Prof Garvan C Kane MD PhD, Zachi I Attia PhD
{"title":"Snapshot artificial intelligence—determination of ejection fraction from a single frame still image: a multi-institutional, retrospective model development and validation study","authors":"Jeffrey G Malins PhD ,&nbsp;D M Anisuzzaman PhD ,&nbsp;John I Jackson PhD ,&nbsp;Eunjung Lee PhD ,&nbsp;Jwan A Naser MBBS ,&nbsp;Behrouz Rostami PhD ,&nbsp;Jared G Bird MD ,&nbsp;Dan Spiegelstein MD ,&nbsp;Talia Amar MSc ,&nbsp;Prof Jae K Oh MD ,&nbsp;Prof Patricia A Pellikka MD ,&nbsp;Jeremy J Thaden MD ,&nbsp;Prof Francisco Lopez-Jimenez MD MSc ,&nbsp;Prof Sorin V Pislaru MD PhD ,&nbsp;Prof Paul A Friedman MD ,&nbsp;Prof Garvan C Kane MD PhD ,&nbsp;Zachi I Attia PhD","doi":"10.1016/j.landig.2025.02.003","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Artificial intelligence (AI) is poised to transform point-of-care practice by providing rapid snapshots of cardiac functioning. Although previous AI models have been developed to estimate left ventricular ejection fraction (LVEF), they have typically used video clips as input, which can be computationally intensive. In the current study, we aimed to develop an LVEF estimation model that takes in static frames as input.</div></div><div><h3>Methods</h3><div>Using retrospective transthoracic echocardiography (TTE) data from Mayo Clinic Rochester and Mayo Clinic Health System sites (training: n=19 627; interval validation: n=862), we developed a two-dimensional convolutional neural network model that provides an LVEF estimate associated with an input frame from an echocardiogram video. We then evaluated model performance for Mayo Clinic TTE data (Rochester, n=1890; Arizona, n=1695; Florida, n=1862), the EchoNet-Dynamic TTE dataset (n=10 015), a prospective cohort of patients from whom TTE and handheld cardiac ultrasound (HCU) were simultaneously collected (n=625), and a prospective cohort of patients from whom HCU clips were collected by expert sonographers and novice users (n=100, distributed across three external sites).</div></div><div><h3>Findings</h3><div>We observed consistently strong model performance when estimates from single frames were averaged across multiple video clips, even when only one frame was taken per video (for classifying LVEF ≤40% <em>vs</em> LVEF&gt;40%, area under the receiver operating characteristic curve [AUC]&gt;0·90 for all datasets except for HCU clips collected by novice users, for which AUC&gt;0·85). We also observed that LVEF estimates differed slightly depending on the phase of the cardiac cycle when images were captured.</div></div><div><h3>Interpretation</h3><div>When aiming to rapidly deploy such models, single frames from multiple videos might be sufficient for LVEF classification. 
Furthermore, the observed sensitivity to the cardiac cycle offers some insights on model performance from an explainability perspective.</div></div><div><h3>Funding</h3><div>Internal institutional funds provided by the Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA.</div></div>","PeriodicalId":48534,"journal":{"name":"Lancet Digital Health","volume":"7 4","pages":"Pages e255-e263"},"PeriodicalIF":23.8000,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Lancet Digital Health","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589750025000275","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
引用次数: 0

Abstract

Background

Artificial intelligence (AI) is poised to transform point-of-care practice by providing rapid snapshots of cardiac functioning. Although previous AI models have been developed to estimate left ventricular ejection fraction (LVEF), they have typically used video clips as input, which can be computationally intensive. In the current study, we aimed to develop an LVEF estimation model that takes static frames as input.

Methods

Using retrospective transthoracic echocardiography (TTE) data from Mayo Clinic Rochester and Mayo Clinic Health System sites (training: n=19 627; internal validation: n=862), we developed a two-dimensional convolutional neural network model that provides an LVEF estimate associated with an input frame from an echocardiogram video. We then evaluated model performance on Mayo Clinic TTE data (Rochester, n=1890; Arizona, n=1695; Florida, n=1862), the EchoNet-Dynamic TTE dataset (n=10 015), a prospective cohort of patients from whom TTE and handheld cardiac ultrasound (HCU) data were simultaneously collected (n=625), and a prospective cohort of patients from whom HCU clips were collected by expert sonographers and novice users (n=100, distributed across three external sites).
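
The abstract does not specify the network architecture, input resolution, or preprocessing, so the following is only a minimal illustrative sketch (in PyTorch) of the general idea: a two-dimensional convolutional network that maps a single grayscale echocardiogram frame to a scalar LVEF estimate. The layer sizes, the 224×224 input, and the class name FrameLVEFNet are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of a 2-D CNN that regresses LVEF from a single echo frame.
# Architecture, input size, and preprocessing are illustrative assumptions;
# the abstract does not specify them.
import torch
import torch.nn as nn

class FrameLVEFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single scalar output: estimated LVEF (%)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale echo frames
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)

model = FrameLVEFNet()
frame = torch.randn(1, 1, 224, 224)  # one hypothetical 224x224 frame
print(model(frame))                  # per-frame LVEF estimate (untrained weights)
```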

Findings

We observed consistently strong model performance when estimates from single frames were averaged across multiple video clips, even when only one frame was taken per video (for classifying LVEF ≤40% vs LVEF>40%, area under the receiver operating characteristic curve [AUC]>0·90 for all datasets except for HCU clips collected by novice users, for which AUC>0·85). We also observed that LVEF estimates differed slightly depending on the phase of the cardiac cycle when images were captured.
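
To make the evaluation scheme concrete, the sketch below averages single-frame LVEF estimates across a patient's video clips and then scores the binary LVEF ≤40% vs >40% task with ROC AUC via scikit-learn. The data are synthetic placeholders, and the patient counts, noise level, and variable names are assumptions for illustration only; this is not the study's analysis code.

```python
# Sketch: average per-frame LVEF estimates across clips, then compute ROC AUC
# for the binary task LVEF <= 40% vs > 40%. All data here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-clip estimates for 200 patients, one frame per clip.
n_patients, n_clips = 200, 4
true_lvef = rng.uniform(20, 70, size=n_patients)  # ground-truth LVEF (%)
frame_estimates = true_lvef[:, None] + rng.normal(0, 8, size=(n_patients, n_clips))

# Average the single-frame estimates across a patient's clips.
patient_estimate = frame_estimates.mean(axis=1)

# Reduced LVEF (<= 40%) is the positive class; lower estimates score higher.
y_true = (true_lvef <= 40).astype(int)
y_score = -patient_estimate

print("AUC:", roc_auc_score(y_true, y_score))
```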

Interpretation

When aiming to rapidly deploy such models, single frames from multiple videos might be sufficient for LVEF classification. Furthermore, the observed sensitivity to the cardiac cycle offers some insight into model performance from an explainability perspective.

Funding

Internal institutional funds provided by the Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA.