Exploiting temporal information in echocardiography for improved image segmentation

Jieyu Hu, E. Smistad, I. M. Salte, H. Dalen, L. Løvstakken
{"title":"Exploiting temporal information in echocardiography for improved image segmentation","authors":"Jieyu Hu, E. Smistad, I. M. Salte, H. Dalen, L. Løvstakken","doi":"10.1109/IUS54386.2022.9958670","DOIUrl":null,"url":null,"abstract":"Echocardiography is based on evaluating cineloops, where the temporal information is important for diagnosis. This information is seldom fully utilized in image analyses based on deep learning due to the massive manual annotation work required. In this work, we investigate the use of temporal information for the left heart segmentation throughout the cardiac cycle, both to enhance the training of simpler networks and for spatiotemporal neural networks to ensure consistent segmentation over time. Fully annotated cineloops were achieved in a semi-supervised manner, using pseudo-labeling from a network trained using limited annotations from the cardiac cycle. A temporal outlier removal method was developed to avoid artefact annotations. The study used $\\mathbf{N}\\boldsymbol{=174}$ recordings with A2C, A3C, and A4C views annotated at 7 frames, targeted at ES/ED and challenging cardiac cycle time points, with a testing set of $\\mathbf{N}\\boldsymbol{=25}$. We compared the performance of non-temporal U-Net segmentation trained with and without fully annotated cineloops, and by adding convLSTM layers in various configurations (encoder/decoder) to improve temporal consistency. Compared to the baseline U-Net trained at ES/ED, adding extra annotations targeted at time points with typical issues (e.g. valve opening), reduced outliers significantly and improved the average Dice. The fully automated pseudo-labeling exploited all frames, reduced outliers, and increased Dice to the same level as extra manual annotations. This approach also enabled the training of spatiotemporal networks. 
Adding convLSTM layers at each level in the encoder provided the best results.","PeriodicalId":272387,"journal":{"name":"2022 IEEE International Ultrasonics Symposium (IUS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Ultrasonics Symposium (IUS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IUS54386.2022.9958670","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Echocardiography is based on evaluating cineloops, in which temporal information is important for diagnosis. This information is seldom fully utilized in deep-learning-based image analysis because of the massive manual annotation work required. In this work, we investigate the use of temporal information for left-heart segmentation throughout the cardiac cycle, both to enhance the training of simpler networks and to let spatiotemporal neural networks ensure consistent segmentation over time. Fully annotated cineloops were obtained in a semi-supervised manner, using pseudo-labels from a network trained on limited annotations from the cardiac cycle. A temporal outlier removal method was developed to discard artefact annotations. The study used N = 174 recordings with A2C, A3C, and A4C views, each annotated at 7 frames targeted at ES/ED and challenging cardiac-cycle time points, with a test set of N = 25. We compared non-temporal U-Net segmentation trained with and without fully annotated cineloops, and networks with convLSTM layers added in various configurations (encoder/decoder) to improve temporal consistency. Compared to the baseline U-Net trained at ES/ED, adding extra annotations targeted at time points with typical issues (e.g. valve opening) significantly reduced outliers and improved the average Dice score. Fully automated pseudo-labeling exploited all frames, reduced outliers, and raised the Dice score to the same level as the extra manual annotations, while also enabling the training of spatiotemporal networks. Adding convLSTM layers at each level of the encoder gave the best results.
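The abstract does not specify how the temporal outlier removal works. A minimal sketch of one plausible criterion is shown below, assuming pseudo-label masks are compared to their temporal neighbours via the Dice overlap and flagged when they disagree with all of them; the function name `flag_temporal_outliers` and the threshold value are hypothetical, not from the paper.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def flag_temporal_outliers(masks, threshold=0.8):
    """Flag frames whose pseudo-label mask overlaps poorly with its neighbours.

    masks: boolean array of shape (T, H, W), one mask per cineloop frame.
    Returns a list of booleans, True where the frame is a likely artefact
    (its Dice overlap with every available neighbour falls below threshold).
    """
    T = len(masks)
    flags = [False] * T
    for t in range(T):
        scores = []
        if t > 0:
            scores.append(dice(masks[t], masks[t - 1]))
        if t < T - 1:
            scores.append(dice(masks[t], masks[t + 1]))
        # A genuine segmentation changes smoothly over the cardiac cycle,
        # so an outlier disagrees with both the previous and the next frame.
        flags[t] = all(s < threshold for s in scores)
    return flags
```

Flagged frames could then simply be dropped from the pseudo-labeled training set; the paper may use a different criterion.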