AMeta-FD: Adversarial Meta-learning for Few-shot retinal OCT image Despeckling

Impact Factor: 5.4 · JCR Q1 (Engineering, Biomedical) · CAS Medicine, Tier 2
Yi Zhou, Tao Peng, Thiara Sana Ahmed, Fei Shi, Weifang Zhu, Dehui Xiang, Leopold Schmetterer, Jianxin Jiang, Bingyao Tan, Xinjian Chen
Journal: Computerized Medical Imaging and Graphics, Volume 124, Article 102597
DOI: 10.1016/j.compmedimag.2025.102597
Published: 2025-07-04
Citations: 0

Abstract


AMeta-FD: Adversarial Meta-learning for Few-shot retinal OCT image Despeckling

Speckle noise in optical coherence tomography (OCT) images compromises the performance of image analysis tasks such as retinal layer boundary detection. Deep learning algorithms have demonstrated the advantage of being more cost-effective and robust than hardware solutions and conventional image processing algorithms. However, these methods usually require large training datasets, which are time-consuming to acquire. This paper proposes a novel method called Adversarial Meta-learning for Few-shot raw retinal OCT image Despeckling (AMeta-FD) to reduce speckle noise in OCT images. Our method involves two training phases: (1) adversarial meta-training on synthetic noisy OCT image pairs, and (2) fine-tuning with a small set of raw-clean image pairs containing speckle noise. Additionally, we introduce a new suppression loss to effectively reduce the contribution of non-tissue pixels. The ground truth in this study is generated by registering and averaging multiple repeated images. AMeta-FD requires only 60 raw-clean image pairs, about 12% of the whole training dataset, yet it achieves performance on par with traditional transfer training that utilizes the entire training dataset. Extensive evaluations show that in terms of signal-to-noise ratio (SNR), AMeta-FD surpasses traditional non-learning-based despeckling methods by at least 15 dB. It also outperforms the recent meta-learning-based image denoising method, Few-Shot Meta-Denoising (FSMD), by 11.01 dB, and exceeds our previous best method by 3 dB. The code for AMeta-FD is available at https://github.com/Zhouyi-Zura/AMeta-FD.
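Three of the abstract's ingredients can be sketched in a few lines of NumPy: ground truth produced by averaging co-registered repeated scans (speckle is roughly uncorrelated across repeats, so the mean converges toward a clean image), an SNR metric in decibels, and a loss that down-weights non-tissue pixels. This is a minimal illustrative sketch, not the authors' implementation: the function names, the binary `tissue_mask`, and the `bg_weight` hyperparameter are assumptions for illustration.

```python
import numpy as np

def make_ground_truth(frames):
    """Average co-registered repeated B-scans of the same cross-section.

    `frames` has shape (n_repeats, H, W) and is assumed to be already
    registered; averaging suppresses the (roughly uncorrelated) speckle.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0)

def snr_db(clean, image):
    """SNR of `image` against the clean reference, in decibels."""
    noise = np.asarray(clean) - np.asarray(image)
    return 10.0 * np.log10(np.sum(np.square(clean)) / np.sum(np.square(noise)))

def suppression_loss(pred, target, tissue_mask, bg_weight=0.1):
    """Weighted L1 loss in the spirit of the suppression loss above.

    `tissue_mask` is 1 on tissue and 0 on background; background pixels
    contribute with an assumed small weight `bg_weight` instead of being
    counted fully.
    """
    w = tissue_mask + bg_weight * (1.0 - tissue_mask)
    return float(np.mean(w * np.abs(pred - target)))
```

With 16 noisy repeats, the averaged ground truth should sit markedly closer to the underlying clean image than any single frame, which is exactly why repeated-acquisition averaging is a usable despeckling reference.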
Source journal: Computerized Medical Imaging and Graphics
- CiteScore: 10.70
- Self-citation rate: 3.50%
- Articles per year: 71
- Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.