A combined system with convolutional neural networks and transformers for automated quantification of left ventricular ejection fraction from 2D echocardiographic images

IF 4.4 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Mingming Lin, Liwei Zhang, Zhibin Wang, Hengyu Liu, Keqiang Wang, Guozhang Tang, Wenkai Wang, Pin Sun
{"title":"A combined system with convolutional neural networks and transformers for automated quantification of left ventricular ejection fraction from 2D echocardiographic images","authors":"Mingming Lin ,&nbsp;Liwei Zhang ,&nbsp;Zhibin Wang ,&nbsp;Hengyu Liu ,&nbsp;Keqiang Wang ,&nbsp;Guozhang Tang ,&nbsp;Wenkai Wang ,&nbsp;Pin Sun","doi":"10.1016/j.imed.2024.10.001","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Accurate measurement of left ventricular ejection fraction (LVEF) is crucial in diagnosing and managing cardiac conditions. Deep learning (DL) models offer potential to improve the consistency and efficiency of these measurements, reducing reliance on operator expertise.</div></div><div><h3>Objective</h3><div>The aim of this study was to develop an innovative software-hardware combined device, featuring a novel DL algorithm for the automated quantification of LVEF from 2D echocardiographic images.</div></div><div><h3>Methods</h3><div>A dataset of 2,113 patients admitted to the Affiliated Hospital of Qingdao University between January and June 2023 was assembled and split into training and test groups. Another 500 patients from another campus were prospectively collected as external validation group. The age, sex, reason for echocardiography and the type of patients were collected. Following standardized protocol training by senior echocardiographers using domestic ultrasound equipment, apical four-chamber view images were labeled manually and utilized for training our deep learning framework. This system combined convolutional neural networks (CNN) with transformers for enhanced image recognition and analysis. Combined with the model that was named QHAutoEF, a ‘one-touch’ software module was developed and integrated into the echocardiography hardware, providing intuitive, real-time visualization of LVEF measurements. The device's performance was evaluated with metrics such as the Dice coefficient and Jaccard index, along with computational efficiency indicators. The dice index, intersection over union, size, floating point operations per second and calculation time were used to compare the performance of our model with alternative deep learning architectures. Bland-Altman analysis and the receiver operating characteristic (ROC) curve were used for validation of the accuracy of the model. The scatter plot was used to evaluate the consistency of the manual and automated results among subgroups.</div></div><div><h3>Results</h3><div>Patients from external validation group were older than those from training group ((60±14) years <em>vs.</em> (55±16) years, respectively, <em>P</em> &lt; 0.001). The gender distribution among three groups were showed no statistical difference (43 % <em>vs.</em> 42 % <em>vs.</em> 50 %, respectively, <em>P</em> = 0.095). Significant differences were showed among patients with different type (all <em>P</em> &lt; 0.001) and reason for echocardiography (all <em>P</em> &lt;0.001 except for other reasons). QHAutoEF achieved a high Dice index (0.942 at end-diastole, 0.917 at end-systole) with a notably compact model size (10.2 MB) and low computational cost (93.86 G floating point operations (FLOPs)). It exhibited high consistency with expert manual measurements (intraclass correlation coefficient (ICC) =0.90 (0.89, 0.92), <em>P</em> &lt; 0.001) and excellent capability to differentiate patients with LVEF ≥60 % from those with reduced function, yielding an area under the operation curve (AUC) of 0.92 (0.90–0.95). 
Subgroup analysis showed a good correlation between QHAutoEF results and manual results from experienced experts among patients of different types (<em>R</em> = 0.93, 0.73, 0.92, respectively, <em>P</em> &lt;0.001) and ages (<em>R</em> = 0.92, 0.94, 0.89, 0.91, 0.81, respectively, <em>P</em> &lt;0.001).</div></div><div><h3>Conclusions</h3><div>Our software-hardware device offers an improved solution for the automated measurement of LVEF, demonstrating not only high accuracy and consistency with manual expert measurements but also practical adaptability for clinical settings. This device might potentially support clinicians and augment clinical decision.</div></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"5 1","pages":"Pages 46-53"},"PeriodicalIF":4.4000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266710262400086X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}

Abstract

Background

Accurate measurement of left ventricular ejection fraction (LVEF) is crucial in diagnosing and managing cardiac conditions. Deep learning (DL) models offer the potential to improve the consistency and efficiency of these measurements, reducing reliance on operator expertise.
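For reference, LVEF is conventionally defined from the end-diastolic volume (EDV) and end-systolic volume (ESV); this is the standard clinical definition, not a formula specific to this study:

$$\mathrm{LVEF} = \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}} \times 100\%$$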

Objective

The aim of this study was to develop an innovative combined software-hardware device featuring a novel DL algorithm for the automated quantification of LVEF from 2D echocardiographic images.

Methods

A dataset of 2,113 patients admitted to the Affiliated Hospital of Qingdao University between January and June 2023 was assembled and split into training and test groups. A further 500 patients from another campus were prospectively enrolled as an external validation group. Age, sex, reason for echocardiography, and patient type were recorded. Following standardized protocol training by senior echocardiographers using domestic ultrasound equipment, apical four-chamber view images were manually labeled and used to train our deep learning framework, which combined convolutional neural networks (CNNs) with transformers for enhanced image recognition and analysis. Built around this model, named QHAutoEF, a ‘one-touch’ software module was developed and integrated into the echocardiography hardware, providing intuitive, real-time visualization of LVEF measurements. Model performance was compared with alternative deep learning architectures using the Dice index, intersection over union (Jaccard index), model size, floating point operations (FLOPs), and calculation time. Bland-Altman analysis and the receiver operating characteristic (ROC) curve were used to validate the accuracy of the model, and scatter plots were used to evaluate the consistency between manual and automated results across subgroups.
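To illustrate the general idea of combining a CNN with a transformer for segmentation, the following is a minimal PyTorch sketch: a convolutional encoder extracts local features, a transformer encoder adds global context over the flattened feature map, and a convolutional decoder predicts a left-ventricle mask. This is an assumption-laden sketch for orientation only; it is not the published QHAutoEF architecture, whose details are not given in the abstract.

```python
import torch
import torch.nn as nn

class CNNTransformerSeg(nn.Module):
    """Minimal CNN encoder + transformer bottleneck + CNN decoder.

    Illustrative only -- NOT the published QHAutoEF architecture.
    """

    def __init__(self, in_ch: int = 1, base: int = 32, n_classes: int = 1):
        super().__init__()
        # CNN encoder: downsample the echo frame and extract local features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer encoder over the flattened feature map (global context).
        layer = nn.TransformerEncoderLayer(d_model=base * 4, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Decoder: upsample back to input resolution and predict the mask logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, n_classes, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(x)                    # (B, C, H/8, W/8)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/64, C)
        tokens = self.transformer(tokens)
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feat)                 # (B, n_classes, H, W)

# Quick shape check on a dummy apical four-chamber frame.
model = CNNTransformerSeg()
print(model(torch.randn(1, 1, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```

The segmentation metrics named above (Dice index and intersection over union) can be computed directly from binary masks; a minimal NumPy sketch follows. The random masks are placeholders included only so the snippet runs standalone.

```python
import numpy as np

def dice_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Placeholder masks standing in for a predicted and a manually labeled LV contour.
rng = np.random.default_rng(0)
pred_mask = rng.random((256, 256)) > 0.5
manual_mask = rng.random((256, 256)) > 0.5
print(f"Dice: {dice_index(pred_mask, manual_mask):.3f}")
print(f"IoU:  {jaccard_index(pred_mask, manual_mask):.3f}")
```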

Results

Patients in the external validation group were older than those in the training group (60 ± 14 years vs. 55 ± 16 years, P < 0.001). The sex distribution showed no statistically significant difference among the three groups (43% vs. 42% vs. 50%, P = 0.095). Significant differences were observed among patients of different types (all P < 0.001) and reasons for echocardiography (all P < 0.001 except for other reasons). QHAutoEF achieved a high Dice index (0.942 at end-diastole, 0.917 at end-systole) with a notably compact model size (10.2 MB) and low computational cost (93.86 G floating point operations (FLOPs)). It showed high consistency with expert manual measurements (intraclass correlation coefficient (ICC) = 0.90 (0.89, 0.92), P < 0.001) and excellent capability to differentiate patients with LVEF ≥ 60% from those with reduced function, yielding an area under the ROC curve (AUC) of 0.92 (0.90–0.95). Subgroup analysis showed good correlation between QHAutoEF results and manual results from experienced experts among patients of different types (R = 0.93, 0.73, and 0.92, respectively; all P < 0.001) and age groups (R = 0.92, 0.94, 0.89, 0.91, and 0.81, respectively; all P < 0.001).
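For readers who want to reproduce this style of agreement analysis on their own paired measurements, the sketch below computes the Bland-Altman bias with 95% limits of agreement and an AUC for detecting LVEF ≥ 60%. The values are small hypothetical numbers included only so the snippet runs; they are not data from this study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical paired LVEF measurements (%), manual expert vs. automated -- not study data.
manual_lvef = np.array([62.0, 58.5, 45.0, 67.2, 30.1, 61.4, 55.0, 70.3])
auto_lvef = np.array([60.8, 59.9, 43.7, 66.0, 32.5, 62.1, 53.8, 69.5])

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
diff = auto_lvef - manual_lvef
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bias: {bias:+.2f}%, 95% LoA: [{bias - loa:.2f}%, {bias + loa:.2f}%]")

# ROC analysis for identifying LVEF >= 60%, with the manual measurement as reference.
labels = (manual_lvef >= 60).astype(int)
print(f"AUC: {roc_auc_score(labels, auto_lvef):.3f}")
```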

Conclusions

Our software-hardware device offers an improved solution for the automated measurement of LVEF, demonstrating not only high accuracy and consistency with expert manual measurements but also practical adaptability to clinical settings. It could potentially support clinicians and augment clinical decision-making.
