Automated assessment of transthoracic echocardiogram image quality using deep neural networks

IF 4.4 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Robert B. Labs, Apostolos Vrettos, Jonathan Loo, Massoud Zolgharni
Intelligent medicine, Volume 3, Issue 3, August 2023, Pages 191-199. DOI: 10.1016/j.imed.2022.08.001
Citations: 2

Abstract

Background

Standard views in two-dimensional echocardiography are well established, but the quality of acquired images is highly dependent on operator skill and is assessed subjectively. This study aimed to provide an objective assessment pipeline for echocardiogram image quality by defining a new set of domain-specific quality indicators. Image quality assessment can thus be automated to enhance clinical measurements, interpretation, and real-time optimization.

Methods

We developed deep neural networks for the automated assessment of echocardiographic frames randomly sampled from 11,262 adult patients. The private echocardiography dataset consists of 33,784 frames, previously acquired between 2010 and 2020. Unlike non-medical images, where full-reference metrics can be applied, echocardiographic data are highly heterogeneous and require blind-reference image quality assessment (IQA) metrics. Therefore, deep learning approaches were used to extract the spatiotemporal features, and the image quality indicators were evaluated against the mean absolute error. Our quality indicators encapsulate both anatomical and pathological elements to provide multivariate assessment scores for anatomical visibility, clarity, depth-gain, and foreshortedness.
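
As a minimal illustration of evaluating such multivariate indicator scores against the mean absolute error, the sketch below compares hypothetical per-frame predictions with expert labels. The indicator names follow the paper, but the 0-5 scoring scale, the predictions, and the labels are invented for demonstration only.

```python
# Sketch (not the paper's code): MAE evaluation of per-indicator
# quality scores. All numeric values here are made up.

def mean_absolute_error(preds, labels):
    """Mean absolute error over paired prediction/label sequences."""
    assert len(preds) == len(labels)
    return sum(abs(p - t) for p, t in zip(preds, labels)) / len(preds)

# Hypothetical model scores vs. expert labels for a handful of frames.
indicators = {
    "anatomical_visibility": ([4.1, 3.8, 2.9], [4.0, 4.0, 3.0]),
    "clarity":               ([3.2, 4.7, 1.8], [3.0, 5.0, 2.0]),
    "depth_gain":            ([4.9, 2.1, 3.3], [5.0, 2.0, 3.5]),
    "foreshortedness":       ([1.2, 0.8, 2.4], [1.0, 1.0, 2.5]),
}

for name, (preds, labels) in indicators.items():
    print(f"{name}: MAE = {mean_absolute_error(preds, labels):.3f}")
```

In practice the labels would come from expert annotation of each frame, and the MAE would be aggregated over the full validation set rather than a few frames.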

Results

The model achieved accuracies of 94.4%, 96.8%, 96.2%, and 97.4% for anatomical visibility, clarity, depth-gain, and foreshortedness, respectively. A mean model error of 0.375±0.0052 and a computational speed of 2.52 ms per frame (real-time performance) were achieved.
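
A quick sanity check (not from the paper) of the real-time claim: at 2.52 ms per frame, the model can score far more frames per second than typical transthoracic acquisition rates of roughly 30-60 fps.

```python
# Back-of-envelope arithmetic only; the 30/60 fps acquisition rates
# are typical values, not figures reported in the paper.

latency_ms = 2.52                      # reported per-frame inference time
throughput_fps = 1000.0 / latency_ms   # frames the model can score per second

for acquisition_fps in (30, 60):
    budget_ms = 1000.0 / acquisition_fps   # time between incoming frames
    headroom = budget_ms / latency_ms      # how many times over budget
    print(f"{acquisition_fps} fps stream: {budget_ms:.1f} ms budget, "
          f"~{headroom:.0f}x headroom")

print(f"max throughput ~{throughput_fps:.0f} fps")
```

Even at 60 fps, each frame leaves over 16 ms of budget, so per-frame scoring comfortably keeps pace with live acquisition.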

Conclusion

The novel approach offers new insight into the objective assessment of transthoracic echocardiogram image quality and clinical quantification in A4C and PLAX views. It also lays a stronger foundation for an operator guidance system that can shorten the learning curve for acquiring optimum-quality images during transthoracic examination.

Source journal

Intelligent medicine (Surgery, Radiology and Imaging; Artificial Intelligence; Biomedical Engineering)

CiteScore: 5.20; Self-citation rate: 0.00%; Articles published: 19