Optimized Apodizations with Training Simulations (OATS): Learned depth-dependent apodizations via differentiable beamforming for reduced operator tuning

IF 4.1 · CAS Tier 2 (Physics & Astrophysics) · JCR Q1 (Acoustics)
Di Xiao , Hassan Nahas , Misaki Hiroshima , Aya Kishimoto , Alfred C.H. Yu
Journal: Ultrasonics, Volume 159, Article 107827
DOI: 10.1016/j.ultras.2025.107827
Published: 2025-09-24 (Journal Article)
Citations: 0

Abstract

Ultrasound is a point-of-care imaging modality that allows for real-time operation. While real-time capabilities are advantageous, one potential concern is the operator dependency of the modality as settings selected by the operator can alter the image appearance. Improvement of the B-mode image to necessitate fewer adjustments can reduce the operator dependency. Here, we propose a supervised learning framework (Optimal Apodizations with Training Simulations – OATS) to devise new apodization weights for image quality improvement. Our framework relies on the use of a differentiable beamformer to iteratively optimize apodization weights by comparing differences between simulated ground truth images and the corresponding post-beamformed images (simulated training set of over 200 images). We experimentally verified that these apodization weights resulted in higher quality B-mode images on both simulated and real-world data for focused and unfocused imaging scenarios. In the focused imaging scenario, the OATS-apodized images demonstrated reduced sidelobe artifact, improved lateral resolution (11 %), and improved signal equalization across depth when compared to a conventional Hanning apodization. In the unfocused imaging scenario, we observed reduced sidelobe artifacts and improved tissue-to-lesion contrast by up to 13 dB when compared against fixed F-number beamforming. Additionally, the OATS apodization weights were physically interpretable and learned to emulate image formation parameters such as time-gain compensation, F-number limited aperture, and transmit focus through the supervised learning procedure. Overall, the proposed framework successfully learned generalizable receive apodizations to improve image quality.
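The core idea of the framework — backpropagating an image-domain loss through the beamformer to learn receive apodization weights — can be illustrated with a minimal toy sketch. This is not the authors' implementation: the channel data, Hanning "ground-truth" apodization, and problem sizes below are all made up for illustration, and the gradient is derived by hand for a linear delay-and-sum model rather than obtained via automatic differentiation.

```python
import numpy as np

# Toy sketch of the differentiable-beamforming idea (not the authors' code):
# model delay-and-sum as a linear map of per-pixel delayed channel samples S,
# then learn receive apodization weights w by gradient descent on the MSE
# between the beamformed image S @ w and a simulated ground-truth image.
rng = np.random.default_rng(0)
n_ch, n_pix = 8, 32                     # channels, image pixels (toy sizes)
S = rng.normal(size=(n_pix, n_ch))      # stand-in for delayed RF channel data
w_true = np.hanning(n_ch)               # hypothetical "ideal" apodization
target = S @ w_true                     # simulated ground-truth image

w = np.ones(n_ch)                       # start from rectangular apodization
lr = 0.2
for _ in range(500):
    pred = S @ w                                  # "beamform" with current weights
    grad = 2.0 * S.T @ (pred - target) / n_pix    # dMSE/dw, derived by hand
    w -= lr * grad                                # gradient-descent update

mse = float(np.mean((S @ w - target) ** 2))       # converges toward zero
```

In the paper, the same loop runs through a full differentiable beamformer with automatic differentiation, the weights are depth-dependent rather than a single vector, and the training set consists of over 200 simulated image pairs; the sketch only shows why a differentiable image-formation chain makes the apodization weights learnable by ordinary gradient descent.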
Source journal: Ultrasonics (Medicine – Nuclear Medicine)
CiteScore: 7.60
Self-citation rate: 19.00%
Articles per year: 186
Review time: 3.9 months
Journal description: Ultrasonics is the only internationally established journal which covers the entire field of ultrasound research and technology and all its many applications. Ultrasonics contains a variety of sections to keep readers fully informed and up-to-date on the whole spectrum of research and development throughout the world. Ultrasonics publishes papers of exceptional quality and of relevance to both academia and industry. Manuscripts in which ultrasonics is a central issue, and not simply an incidental tool or minor issue, are welcomed. As well as top-quality original research papers and review articles by world-renowned experts, Ultrasonics also regularly features short communications, a calendar of forthcoming events, and special issues dedicated to topical subjects.