Regional Image Quality Scoring for 2-D Echocardiography Using Deep Learning

IF 2.4 · CAS Tier 3 (Medicine) · Q2 ACOUSTICS
Gilles Van De Vyver, Svein-Erik Måsøy, Håvard Dalen, Bjørnar Leangen Grenne, Espen Holte, Sindre Hellum Olaisen, John Nyberg, Andreas Østvik, Lasse Løvstakken, Erik Smistad
DOI: 10.1016/j.ultrasmedbio.2024.12.008
Journal: Ultrasound in Medicine and Biology, Vol. 51, Issue 4, pp. 638–649
Published: 2025-01-26 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0301562924004691
Citations: 0

Abstract

Regional Image Quality Scoring for 2-D Echocardiography Using Deep Learning

Objective

To develop and compare methods to automatically estimate regional ultrasound image quality for echocardiography separate from view correctness.

Methods

Three methods for estimating image quality were developed: (i) classic pixel-based metric: the generalized contrast-to-noise ratio (gCNR), computed on myocardial segments (region of interest) and left ventricle lumen (background), extracted by a U-Net segmentation model; (ii) local image coherence: the average local coherence as predicted by a U-Net model that predicts image coherence from B-mode ultrasound images at the pixel level; (iii) deep convolutional network: an end-to-end deep-learning model that predicts the quality of each region in the image directly. These methods were evaluated against manual regional quality annotations provided by three experienced cardiologists.
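The gCNR used in method (i) is a histogram-overlap measure: it equals one minus the overlap between the intensity distributions of the region of interest and the background, so it is 0 for indistinguishable regions and approaches 1 for well-separated ones. A minimal sketch of this computation on pixel arrays (function and parameter names are ours, not the paper's implementation):

```python
import numpy as np

def gcnr(roi: np.ndarray, background: np.ndarray, bins: int = 256) -> float:
    """Generalized contrast-to-noise ratio via histogram overlap.

    gCNR = 1 - OVL, where OVL is the overlap between the intensity
    distributions of the region of interest and the background.
    """
    lo = min(roi.min(), background.min())
    hi = max(roi.max(), background.max())
    # Estimate both intensity distributions on a common support
    p_roi, _ = np.histogram(roi, bins=bins, range=(lo, hi))
    p_bg, _ = np.histogram(background, bins=bins, range=(lo, hi))
    p_roi = p_roi / p_roi.sum()
    p_bg = p_bg / p_bg.sum()
    ovl = np.minimum(p_roi, p_bg).sum()  # distribution overlap in [0, 1]
    return 1.0 - ovl
```

In the paper's setup, `roi` would hold the B-mode intensities of a myocardial segment and `background` those of the left-ventricle lumen, both extracted by the U-Net segmentation model.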

Results

The results indicated poor performance of the gCNR metric, with Spearman correlation to annotations of ρ = 0.24. The end-to-end learning model obtained the best result, ρ = 0.69, comparable to the inter-observer correlation, ρ = 0.63. Finally, the coherence-based method, with ρ = 0.58, outperformed the classical metrics and was more generic than the end-to-end approach.
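Because the cardiologists' annotations are ordinal quality scores, a rank-based measure such as Spearman's ρ is the natural evaluation choice: it is the Pearson correlation of the ranks, so it rewards any monotone agreement, not just linear agreement. A minimal NumPy sketch (names are ours; in practice `scipy.stats.spearmanr` computes the same quantity):

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)   # 1-based ranks
        for val in np.unique(v):              # average ranks over ties
            tie = v == val
            r[tie] = r[tie].mean()
        return r

    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

Any strictly monotone relation between predicted quality and annotated quality yields ρ = 1, which is why the metric suits comparisons against discrete expert scores.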

Conclusion

The deep convolutional network provided the most accurate regional quality prediction, while the coherence-based method offered a more generalizable solution. gCNR showed limited effectiveness in this study. The image quality prediction tool is available as an open-source Python library at https://github.com/GillesVanDeVyver/arqee.
Source journal
CiteScore: 6.20
Self-citation rate: 6.90%
Articles published: 325
Review time: 70 days
期刊介绍: Ultrasound in Medicine and Biology is the official journal of the World Federation for Ultrasound in Medicine and Biology. The journal publishes original contributions that demonstrate a novel application of an existing ultrasound technology in clinical diagnostic, interventional and therapeutic applications, new and improved clinical techniques, the physics, engineering and technology of ultrasound in medicine and biology, and the interactions between ultrasound and biological systems, including bioeffects. Papers that simply utilize standard diagnostic ultrasound as a measuring tool will be considered out of scope. Extended critical reviews of subjects of contemporary interest in the field are also published, in addition to occasional editorial articles, clinical and technical notes, book reviews, letters to the editor and a calendar of forthcoming meetings. It is the aim of the journal fully to meet the information and publication requirements of the clinicians, scientists, engineers and other professionals who constitute the biomedical ultrasonic community.