Di Xiao, Hassan Nahas, Misaki Hiroshima, Aya Kishimoto, Alfred C.H. Yu
Journal: Ultrasonics, Volume 159, Article 107827. DOI: 10.1016/j.ultras.2025.107827. Published 2025-09-24 (Journal Article). Impact Factor 4.1; JCR Q1, Acoustics. Available at: https://www.sciencedirect.com/science/article/pii/S0041624X25002641
Optimized Apodizations with Training Simulations (OATS): Learned depth-dependent apodizations via differentiable beamforming for reduced operator tuning
Ultrasound is a point-of-care imaging modality that allows for real-time operation. While real-time capabilities are advantageous, one potential concern is the operator dependency of the modality, as settings selected by the operator can alter the image appearance. Improving the B-mode image so that fewer adjustments are needed can reduce this operator dependency. Here, we propose a supervised learning framework (Optimized Apodizations with Training Simulations, OATS) to devise new apodization weights for image quality improvement. Our framework relies on a differentiable beamformer to iteratively optimize apodization weights by comparing simulated ground-truth images against the corresponding post-beamformed images (a simulated training set of over 200 images). We experimentally verified that these apodization weights yielded higher-quality B-mode images on both simulated and real-world data, for both focused and unfocused imaging scenarios. In the focused imaging scenario, the OATS-apodized images demonstrated reduced sidelobe artifacts, improved lateral resolution (11%), and improved signal equalization across depth when compared to a conventional Hanning apodization. In the unfocused imaging scenario, we observed reduced sidelobe artifacts and tissue-to-lesion contrast improved by up to 13 dB when compared against fixed F-number beamforming. Additionally, the OATS apodization weights were physically interpretable and, through the supervised learning procedure, learned to emulate image formation parameters such as time-gain compensation, F-number-limited aperture, and transmit focus. Overall, the proposed framework successfully learned generalizable receive apodizations to improve image quality.
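The core idea of the abstract — optimizing depth-dependent apodization weights by backpropagating an image-domain loss through a differentiable beamformer — can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: OATS uses a full differentiable delay-and-sum beamformer trained on over 200 simulated images, whereas the toy below uses a linear per-depth weighted channel sum, random data, and a fabricated target profile. All array sizes, names, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper): depth samples x receive channels
n_depth, n_chan = 32, 16

# Stand-in for delay-aligned per-channel RF data at each depth
channels = rng.normal(size=(n_depth, n_chan))

# Fabricated "ground truth" image, produced by a known depth-dependent
# apodization (a Hanning taper scaled with depth) for demonstration only
true_w = np.hanning(n_chan) * np.linspace(0.5, 1.5, n_depth)[:, None]
target = (channels * true_w).sum(axis=1)

# Learnable depth-dependent apodization weights, initialized uniform
w = np.ones((n_depth, n_chan))

lr = 0.1
for _ in range(500):
    img = (channels * w).sum(axis=1)                  # toy "beamformer": weighted channel sum
    resid = img - target                              # image-domain error vs. ground truth
    grad = (2.0 / n_depth) * resid[:, None] * channels  # exact dMSE/dw for this linear model
    w -= lr * grad                                    # gradient-descent update

final_mse = float(np.mean(((channels * w).sum(axis=1) - target) ** 2))
print(f"final MSE: {final_mse:.2e}")
```

Because the toy beamformer is linear in the weights, the MSE gradient is available in closed form; in the paper's setting, the same loop would be driven by automatic differentiation through the full beamforming chain.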
Journal overview:
Ultrasonics is the only internationally established journal that covers the entire field of ultrasound research and technology and all of its many applications. Ultrasonics contains a variety of sections to keep readers fully informed and up-to-date on the whole spectrum of research and development throughout the world. Ultrasonics publishes papers of exceptional quality and of relevance to both academia and industry. Manuscripts in which ultrasonics is a central issue, rather than an incidental tool or minor consideration, are welcomed.
As well as top-quality original research papers and review articles by world-renowned experts, Ultrasonics also regularly features short communications, a calendar of forthcoming events, and special issues dedicated to topical subjects.