PA OmniNet: A retraining-free, generalizable deep learning framework for robust photoacoustic image reconstruction

IF 7.1 · CAS Tier 1 (Medicine) · Q1 Engineering, Biomedical
Olivier J.M. Stam, Kalloor Joseph Francis, Navchetan Awasthi
{"title":"PA OmniNet:一种用于鲁棒光声图像重建的无需再训练、可推广的深度学习框架","authors":"Olivier J.M. Stam ,&nbsp;Kalloor Joseph Francis ,&nbsp;Navchetan Awasthi","doi":"10.1016/j.pacs.2025.100740","DOIUrl":null,"url":null,"abstract":"<div><div>For clinical translation of photoacoustic imaging cost-effective systems development is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, a 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at <span><span>https://github.com/olivierstam4/PA_OmniNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":56025,"journal":{"name":"Photoacoustics","volume":"45 ","pages":"Article 100740"},"PeriodicalIF":7.1000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PA OmniNet: A retraining-free, generalizable deep learning framework for robust photoacoustic image reconstruction\",\"authors\":\"Olivier J.M. Stam ,&nbsp;Kalloor Joseph Francis ,&nbsp;Navchetan Awasthi\",\"doi\":\"10.1016/j.pacs.2025.100740\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>For clinical translation of photoacoustic imaging cost-effective systems development is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. 
We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, a 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at <span><span>https://github.com/olivierstam4/PA_OmniNet</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":56025,\"journal\":{\"name\":\"Photoacoustics\",\"volume\":\"45 \",\"pages\":\"Article 100740\"},\"PeriodicalIF\":7.1000,\"publicationDate\":\"2025-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photoacoustics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2213597925000631\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photoacoustics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2213597925000631","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

For clinical translation of photoacoustic imaging, cost-effective system development is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, an 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at https://github.com/olivierstam4/PA_OmniNet.
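
The abstract reports average gains of 8.3% in SSIM, an 11.6% reduction in RMSE, and a 1.55 dB increase in PSNR relative to a standard U-net. As a rough illustration of how such a comparison is typically computed against a fully sampled reference image, the minimal sketch below uses standard scikit-image metrics; the `evaluate_reconstruction` helper, the `model.predict`/`load_context_set` interface, and all variable names are assumptions for illustration and are not taken from the PA_OmniNet repository.

```python
# Minimal sketch (not from the PA_OmniNet repository): computing the metrics
# reported in the abstract (SSIM, RMSE, PSNR) for a reconstructed photoacoustic
# image against a fully sampled reference image of the same shape.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def evaluate_reconstruction(reconstruction: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a reconstructed 2D image against a reference image."""
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(reference, reconstruction, data_range=data_range)
    rmse = float(np.sqrt(np.mean((reference - reconstruction) ** 2)))
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
    return {"SSIM": ssim, "RMSE": rmse, "PSNR_dB": psnr}


# Hypothetical usage, assuming a context-conditioned model in the spirit of
# PA OmniNet: the model is adapted to a new system with a small context set
# (4-32 example images) rather than retrained. The `predict` and
# `load_context_set` calls below are illustrative, not the actual API.
# context_set = load_context_set("new_system", n_examples=8)
# recon = model.predict(sparse_input, context=context_set)
# print(evaluate_reconstruction(recon, full_sampling_reference))
```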
Source journal: Photoacoustics
Category: Physics and Astronomy (Atomic and Molecular Physics, and Optics)
CiteScore: 11.40
Self-citation rate: 16.50%
Articles published: 96
Review time: 53 days
About the journal: The open access Photoacoustics journal (PACS) aims to publish original research and review contributions in the field of photoacoustics-optoacoustics-thermoacoustics. This field utilizes acoustical and ultrasonic phenomena excited by electromagnetic radiation for the detection, visualization, and characterization of various materials and biological tissues, including living organisms. Recent advancements in laser technologies, ultrasound detection approaches, inverse theory, and fast reconstruction algorithms have greatly supported the rapid progress in this field. The unique contrast provided by molecular absorption in photoacoustic-optoacoustic-thermoacoustic methods has allowed for addressing unmet biological and medical needs such as pre-clinical research, clinical imaging of vasculature, tissue and disease physiology, drug efficacy, surgery guidance, and therapy monitoring. Applications of this field encompass a wide range of medical imaging and sensing applications, including cancer, vascular diseases, brain neurophysiology, ophthalmology, and diabetes. Moreover, photoacoustics-optoacoustics-thermoacoustics is a multidisciplinary field, with contributions from chemistry and nanotechnology, where novel materials such as biodegradable nanoparticles, organic dyes, targeted agents, theranostic probes, and genetically expressed markers are being actively developed. These advanced materials have significantly improved the signal-to-noise ratio and tissue contrast in photoacoustic methods.