CLA-U-Net: Convolutional Long-short-term-memory Attention-gated U-Net for Automatic Segmentation of the Left Ventricle in 2-D Echocardiograms

Zihan Lin, P. Tsui, Yan Zeng, Guangyu Bin, Shuicai Wu, Zhuhuang Zhou
{"title":"卷积长短时记忆注意门控U-Net用于二维超声心动图左心室自动分割","authors":"Zihan Lin, P. Tsui, Yan Zeng, Guangyu Bin, Shuicai Wu, Zhuhuang Zhou","doi":"10.1109/IUS54386.2022.9958784","DOIUrl":null,"url":null,"abstract":"Left ventricular ejection fraction is one of the important indices to evaluate cardiac function. Manual segmentation of the left ventricle (LV) in 2-D echocardiograms is tedious and time-consuming. We proposed a deep learning method called convolutional long-short-term-memory attention-gated U-Net (CLA-U-Net) for automatic segmentation of the LV in 2-D echocardiograms. The CLA-U-Net model was trained and tested using the EchoNet-Dynamic dataset. The dataset contained 9984 annotated echocardiogram videos (training set: 7456; validation set: 1296; test set 1232). The model was also tested on a private clinical dataset of 20 echocardiogram videos. U-Net was used as the basic encoder and decoder structure, and some very useful structures were designed. In the encoding part, we incorporated a convolutional long-short-term-memory (C-LSTM) block to guide the network to capture the temporal information between frames in the videos. In addition, we replaced the skip-connection structure of the original U-Net with a channel attention mechanism, which can amplify the desired feature signals and suppress the noise. With the proposed CLA-U-Net, the LV was segmented automatically on the EchoNet-Dynamic test set, and a Dice similarity coefficient (DSC) of 0.9311 was obtained. The DSC obtained by the DeepLabV3 network was 0.9236. The hyperparameters of CLA-U-Net were only 19.9 MB, reduced by ~91.6% as compared with DeepLabV3 network. For the private clinical dataset, a DSC of 0.9192 was obtained. Our CLA-U-Net achieved a desirable LV segmentation accuracy, with a lower amount of hyperparameters. The CLA-U-Net may be used as a new lightweight deep learning method for automatic LV segmentation in 2-D echocardiograms.","PeriodicalId":272387,"journal":{"name":"2022 IEEE International Ultrasonics Symposium (IUS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CLA-U-Net: Convolutional Long-short-term-memory Attention-gated U-Net for Automatic Segmentation of the Left Ventricle in 2-D Echocardiograms\",\"authors\":\"Zihan Lin, P. Tsui, Yan Zeng, Guangyu Bin, Shuicai Wu, Zhuhuang Zhou\",\"doi\":\"10.1109/IUS54386.2022.9958784\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Left ventricular ejection fraction is one of the important indices to evaluate cardiac function. Manual segmentation of the left ventricle (LV) in 2-D echocardiograms is tedious and time-consuming. We proposed a deep learning method called convolutional long-short-term-memory attention-gated U-Net (CLA-U-Net) for automatic segmentation of the LV in 2-D echocardiograms. The CLA-U-Net model was trained and tested using the EchoNet-Dynamic dataset. The dataset contained 9984 annotated echocardiogram videos (training set: 7456; validation set: 1296; test set 1232). The model was also tested on a private clinical dataset of 20 echocardiogram videos. U-Net was used as the basic encoder and decoder structure, and some very useful structures were designed. In the encoding part, we incorporated a convolutional long-short-term-memory (C-LSTM) block to guide the network to capture the temporal information between frames in the videos. 
In addition, we replaced the skip-connection structure of the original U-Net with a channel attention mechanism, which can amplify the desired feature signals and suppress the noise. With the proposed CLA-U-Net, the LV was segmented automatically on the EchoNet-Dynamic test set, and a Dice similarity coefficient (DSC) of 0.9311 was obtained. The DSC obtained by the DeepLabV3 network was 0.9236. The hyperparameters of CLA-U-Net were only 19.9 MB, reduced by ~91.6% as compared with DeepLabV3 network. For the private clinical dataset, a DSC of 0.9192 was obtained. Our CLA-U-Net achieved a desirable LV segmentation accuracy, with a lower amount of hyperparameters. The CLA-U-Net may be used as a new lightweight deep learning method for automatic LV segmentation in 2-D echocardiograms.\",\"PeriodicalId\":272387,\"journal\":{\"name\":\"2022 IEEE International Ultrasonics Symposium (IUS)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Ultrasonics Symposium (IUS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IUS54386.2022.9958784\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Ultrasonics Symposium (IUS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IUS54386.2022.9958784","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Left ventricular ejection fraction is one of the important indices for evaluating cardiac function. Manual segmentation of the left ventricle (LV) in 2-D echocardiograms is tedious and time-consuming. We proposed a deep learning method, the convolutional long-short-term-memory attention-gated U-Net (CLA-U-Net), for automatic segmentation of the LV in 2-D echocardiograms. The CLA-U-Net model was trained and tested using the EchoNet-Dynamic dataset, which contained 9984 annotated echocardiogram videos (training set: 7456; validation set: 1296; test set: 1232). The model was also tested on a private clinical dataset of 20 echocardiogram videos. U-Net was used as the basic encoder-decoder structure, with two additions. In the encoding part, we incorporated a convolutional long-short-term-memory (C-LSTM) block to guide the network to capture the temporal information between video frames. In addition, we replaced the skip-connection structure of the original U-Net with a channel attention mechanism, which amplifies the desired feature signals and suppresses noise. With the proposed CLA-U-Net, the LV was segmented automatically on the EchoNet-Dynamic test set, yielding a Dice similarity coefficient (DSC) of 0.9311, compared with 0.9236 for the DeepLabV3 network. The parameters of CLA-U-Net occupied only 19.9 MB, a reduction of ~91.6% relative to DeepLabV3. On the private clinical dataset, a DSC of 0.9192 was obtained. CLA-U-Net achieved a desirable LV segmentation accuracy with a much smaller parameter size, and may be used as a new lightweight deep learning method for automatic LV segmentation in 2-D echocardiograms.
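The abstract does not include code, so the sketches below are only illustrations of the general mechanisms it names, not the authors' implementation. First, a minimal PyTorch sketch of a standard convolutional LSTM cell, the kind of C-LSTM block that can propagate temporal information across echocardiogram frames; the names `ConvLSTMCell` and `run_over_frames`, the kernel size, and the placement of the block in the encoder are all assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell: the four gates are computed by one 2-D
    convolution over the concatenated input and previous hidden state."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state                                   # previous hidden and cell states
        z = self.gates(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(z, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c = f * c + i * g                              # update cell state
        h = o * torch.tanh(c)                          # new hidden state
        return h, (h, c)


def run_over_frames(cell, frames):
    """Apply the cell sequentially over a clip of shape (B, T, C, H, W)."""
    b, t, _, hgt, wdt = frames.shape
    h = frames.new_zeros((b, cell.hidden_channels, hgt, wdt))
    c = torch.zeros_like(h)
    outputs = []
    for k in range(t):
        out, (h, c) = cell(frames[:, k], (h, c))
        outputs.append(out)
    return torch.stack(outputs, dim=1)                 # (B, T, hidden, H, W)
```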
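Second, the channel attention applied to the skip path can be sketched in a squeeze-and-excitation style: pool the encoder skip features globally, pass them through a small bottleneck, and reweight each channel before it reaches the decoder. The abstract does not specify the exact attention design, so `ChannelAttentionGate` and its reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    """Squeeze-and-excitation style gate: per-channel weights in (0, 1)
    amplify useful skip-feature channels and suppress noisy ones."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip):
        weights = self.mlp(self.pool(skip))   # (B, C, 1, 1) channel weights
        return skip * weights                 # reweighted skip features


# Example: gate a 64-channel skip tensor before concatenating it in the decoder.
gate = ChannelAttentionGate(channels=64)
skip = torch.randn(2, 64, 112, 112)
print(gate(skip).shape)                       # torch.Size([2, 64, 112, 112])
```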
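Finally, the Dice similarity coefficient used to report the results above is the standard overlap metric DSC = 2|A ∩ B| / (|A| + |B|); a minimal computation for binary masks is shown below (the `eps` smoothing term is a common convention, not something stated in the paper).

```python
import torch

def dice_similarity_coefficient(pred, target, eps=1e-6):
    """DSC between two binary masks of the same shape."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example with a predicted and a reference LV mask.
pred = (torch.rand(1, 112, 112) > 0.5).long()
ref = (torch.rand(1, 112, 112) > 0.5).long()
print(float(dice_similarity_coefficient(pred, ref)))
```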