Olivier J.M. Stam, Kalloor Joseph Francis, Navchetan Awasthi
{"title":"PA OmniNet:一种用于鲁棒光声图像重建的无需再训练、可推广的深度学习框架","authors":"Olivier J.M. Stam , Kalloor Joseph Francis , Navchetan Awasthi","doi":"10.1016/j.pacs.2025.100740","DOIUrl":null,"url":null,"abstract":"<div><div>For clinical translation of photoacoustic imaging cost-effective systems development is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, a 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at <span><span>https://github.com/olivierstam4/PA_OmniNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":56025,"journal":{"name":"Photoacoustics","volume":"45 ","pages":"Article 100740"},"PeriodicalIF":7.1000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PA OmniNet: A retraining-free, generalizable deep learning framework for robust photoacoustic image reconstruction\",\"authors\":\"Olivier J.M. Stam , Kalloor Joseph Francis , Navchetan Awasthi\",\"doi\":\"10.1016/j.pacs.2025.100740\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>For clinical translation of photoacoustic imaging cost-effective systems development is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. 
We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, a 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at <span><span>https://github.com/olivierstam4/PA_OmniNet</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":56025,\"journal\":{\"name\":\"Photoacoustics\",\"volume\":\"45 \",\"pages\":\"Article 100740\"},\"PeriodicalIF\":7.1000,\"publicationDate\":\"2025-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photoacoustics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2213597925000631\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photoacoustics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2213597925000631","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
PA OmniNet: A retraining-free, generalizable deep learning framework for robust photoacoustic image reconstruction
For clinical translation of photoacoustic imaging, the development of cost-effective systems is necessary. One approach is the use of fewer transducer elements and acquisition channels combined with sparse sampling. However, this approach introduces reconstruction artifacts that degrade image quality. While deep learning models such as U-net have shown promise in reconstructing images from limited data, they typically require retraining for each new system configuration, a process that demands more data and increased computational resources. In this work, we introduce PA OmniNet, a modified U-net model designed to generalize across different system configurations without the need for retraining. Instead of retraining, PA OmniNet adapts to a new system using only a small set of example images (between 4 and 32), known as a context set. This context set conditions the model to effectively remove artifacts from new input images in various sparse sampling photoacoustic imaging applications. We evaluated PA OmniNet against a standard U-net using multiple datasets, including in vivo data from mouse and human subjects, synthetic data, and images captured at different wavelengths. PA OmniNet consistently outperformed the traditional U-net in generalization tasks, achieving average improvements of 8.3% in the Structural Similarity Index, an 11.6% reduction in Root Mean Square Error, and a 1.55 dB increase in Peak Signal-to-Noise Ratio. In 66% of our test cases, the generalized PA OmniNet even outperformed U-net models trained specifically on the new dataset. Code is available at https://github.com/olivierstam4/PA_OmniNet.
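The abstract's central idea, conditioning a U-net-style network on a small context set so that a new system configuration can be handled without retraining, can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden illustration, not the authors' PA OmniNet architecture (the actual implementation is in the linked repository); the FiLM-style feature modulation, the context encoder, and all layer sizes are hypothetical choices made only to show how a small set of paired sparse/reference images could steer a fixed-weight artifact-removal network.

```python
# Minimal, hypothetical sketch of context-set conditioning for artifact removal.
# NOT the PA OmniNet implementation; see https://github.com/olivierstam4/PA_OmniNet.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU, the basic U-net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class ContextConditionedUNet(nn.Module):
    """Tiny encoder-decoder whose bottleneck features are modulated (FiLM-style)
    by an embedding of the context set, so a new system configuration is handled
    by swapping the context images rather than retraining the weights."""
    def __init__(self, ch=32, ctx_dim=64):
        super().__init__()
        self.enc1 = ConvBlock(1, ch)
        self.enc2 = ConvBlock(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = ConvBlock(3 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)
        # Context encoder: embeds each (sparse, reference) pair, then averages.
        self.ctx_encoder = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, ctx_dim),
        )
        # FiLM head: predicts a per-channel scale and shift for the bottleneck.
        self.film = nn.Linear(ctx_dim, 2 * (2 * ch))

    def forward(self, x, ctx_sparse, ctx_ref):
        # x:          (B, 1, H, W)  artifact-laden input from an unseen system
        # ctx_sparse: (K, 1, H, W)  sparse-sampled context examples (K = 4..32)
        # ctx_ref:    (K, 1, H, W)  corresponding reference reconstructions
        ctx = self.ctx_encoder(torch.cat([ctx_sparse, ctx_ref], dim=1)).mean(0)
        scale, shift = self.film(ctx).chunk(2)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e2 = e2 * (1 + scale.view(1, -1, 1, 1)) + shift.view(1, -1, 1, 1)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)


if __name__ == "__main__":
    model = ContextConditionedUNet()
    x = torch.randn(1, 1, 64, 64)           # new input image
    ctx_sparse = torch.randn(8, 1, 64, 64)   # 8-image context set (sparse inputs)
    ctx_ref = torch.randn(8, 1, 64, 64)      # matching reference images
    print(model(x, ctx_sparse, ctx_ref).shape)  # -> torch.Size([1, 1, 64, 64])
```

In this sketch the context pairs are collapsed into a single embedding by averaging, so the same trained weights can, in principle, serve any acquisition geometry whose artifact pattern is characterized by the supplied example images.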
Photoacoustics (Physics and Astronomy: Atomic and Molecular Physics, and Optics)
CiteScore: 11.40
Self-citation rate: 16.50%
Articles published: 96
Review time: 53 days
Journal introduction:
The open access Photoacoustics journal (PACS) aims to publish original research and review contributions in the field of photoacoustics-optoacoustics-thermoacoustics. This field utilizes acoustical and ultrasonic phenomena excited by electromagnetic radiation for the detection, visualization, and characterization of various materials and biological tissues, including living organisms.
Recent advancements in laser technologies, ultrasound detection approaches, inverse theory, and fast reconstruction algorithms have greatly supported the rapid progress in this field. The unique contrast provided by molecular absorption in photoacoustic-optoacoustic-thermoacoustic methods has allowed for addressing unmet biological and medical needs such as pre-clinical research, clinical imaging of vasculature, tissue and disease physiology, drug efficacy, surgery guidance, and therapy monitoring.
Applications of this field encompass a wide range of medical imaging and sensing applications, including cancer, vascular diseases, brain neurophysiology, ophthalmology, and diabetes. Moreover, photoacoustics-optoacoustics-thermoacoustics is a multidisciplinary field, with contributions from chemistry and nanotechnology, where novel materials such as biodegradable nanoparticles, organic dyes, targeted agents, theranostic probes, and genetically expressed markers are being actively developed.
These advanced materials have significantly improved the signal-to-noise ratio and tissue contrast in photoacoustic methods.