Probabilistic volumetric speckle suppression in OCT using deep learning

IF 2.9 | Q2 | Tier 2 (Medicine) | BIOCHEMICAL RESEARCH METHODS
Bhaskara Rao Chintada, Sebastián Ruiz-Lopera, René Restrepo, Brett E. Bouma, Martin Villiger, Néstor Uribe-Patarroyo
{"title":"利用深度学习对 OCT 中的体积斑点进行概率抑制","authors":"Bhaskara Rao Chintada, Sebastián Ruiz-Lopera, René Restrepo, Brett E. Bouma, Martin Villiger, Néstor Uribe-Patarroyo","doi":"10.1364/boe.523716","DOIUrl":null,"url":null,"abstract":"We present a deep learning framework for volumetric speckle reduction in optical coherence tomography (OCT) based on a conditional generative adversarial network (cGAN) that leverages the volumetric nature of OCT data. In order to utilize the volumetric nature of OCT data, our network takes partial OCT volumes as input, resulting in artifact-free despeckled volumes that exhibit excellent speckle reduction and resolution preservation in all three dimensions. Furthermore, we address the ongoing challenge of generating ground truth data for supervised speckle suppression deep learning frameworks by using volumetric non-local means despeckling–TNode– to generate training data. We show that, while TNode processing is computationally demanding, it serves as a convenient, accessible gold-standard source for training data; our cGAN replicates efficient suppression of speckle while preserving tissue structures with dimensions approaching the system resolution of non-local means despeckling while being two orders of magnitude faster than TNode. We demonstrate fast, effective, and high-quality despeckling of the proposed network in different tissue types that are not part of the training. This was achieved with training data composed of just three OCT volumes and demonstrated in three different OCT systems. The open-source nature of our work facilitates re-training and deployment in any OCT system with an all-software implementation, working around the challenge of generating high-quality, speckle-free training data.","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Probabilistic volumetric speckle suppression in OCT using deep learning\",\"authors\":\"Bhaskara Rao Chintada, Sebastián Ruiz-Lopera, René Restrepo, Brett E. Bouma, Martin Villiger, Néstor Uribe-Patarroyo\",\"doi\":\"10.1364/boe.523716\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a deep learning framework for volumetric speckle reduction in optical coherence tomography (OCT) based on a conditional generative adversarial network (cGAN) that leverages the volumetric nature of OCT data. In order to utilize the volumetric nature of OCT data, our network takes partial OCT volumes as input, resulting in artifact-free despeckled volumes that exhibit excellent speckle reduction and resolution preservation in all three dimensions. Furthermore, we address the ongoing challenge of generating ground truth data for supervised speckle suppression deep learning frameworks by using volumetric non-local means despeckling–TNode– to generate training data. We show that, while TNode processing is computationally demanding, it serves as a convenient, accessible gold-standard source for training data; our cGAN replicates efficient suppression of speckle while preserving tissue structures with dimensions approaching the system resolution of non-local means despeckling while being two orders of magnitude faster than TNode. We demonstrate fast, effective, and high-quality despeckling of the proposed network in different tissue types that are not part of the training. 
This was achieved with training data composed of just three OCT volumes and demonstrated in three different OCT systems. The open-source nature of our work facilitates re-training and deployment in any OCT system with an all-software implementation, working around the challenge of generating high-quality, speckle-free training data.\",\"PeriodicalId\":8969,\"journal\":{\"name\":\"Biomedical optics express\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical optics express\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1364/boe.523716\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BIOCHEMICAL RESEARCH METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical optics express","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1364/boe.523716","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract

We present a deep learning framework for volumetric speckle reduction in optical coherence tomography (OCT) based on a conditional generative adversarial network (cGAN) that leverages the volumetric nature of OCT data. In order to utilize the volumetric nature of OCT data, our network takes partial OCT volumes as input, resulting in artifact-free despeckled volumes that exhibit excellent speckle reduction and resolution preservation in all three dimensions. Furthermore, we address the ongoing challenge of generating ground truth data for supervised speckle suppression deep learning frameworks by using volumetric non-local means despeckling–TNode– to generate training data. We show that, while TNode processing is computationally demanding, it serves as a convenient, accessible gold-standard source for training data; our cGAN replicates efficient suppression of speckle while preserving tissue structures with dimensions approaching the system resolution of non-local means despeckling while being two orders of magnitude faster than TNode. We demonstrate fast, effective, and high-quality despeckling of the proposed network in different tissue types that are not part of the training. This was achieved with training data composed of just three OCT volumes and demonstrated in three different OCT systems. The open-source nature of our work facilitates re-training and deployment in any OCT system with an all-software implementation, working around the challenge of generating high-quality, speckle-free training data.
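The abstract describes a conditional GAN that takes partial OCT volumes as input and is trained against volumes despeckled with TNode (volumetric non-local means). As a rough illustration only, the sketch below shows what such a volumetric cGAN training step could look like in PyTorch; the layer choices, volume shapes, loss weights, and names (Generator3D, Discriminator3D, train_step) are hypothetical assumptions for this example and are not the authors' published architecture.

```python
# Hypothetical sketch of a volumetric cGAN despeckling setup; targets stand in for
# TNode (volumetric non-local means) despeckled volumes. Not the authors' actual model.
import torch
import torch.nn as nn


class Generator3D(nn.Module):
    """Toy 3D network mapping a speckled partial OCT volume to a despeckled volume."""

    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator3D(nn.Module):
    """Toy 3D patch discriminator conditioned on the speckled input volume."""

    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, speckled, despeckled):
        # Condition the discriminator by concatenating input and candidate output.
        return self.net(torch.cat([speckled, despeckled], dim=1))


def train_step(gen, disc, g_opt, d_opt, speckled, tnode_target, l1_weight=100.0):
    """One cGAN update: adversarial loss plus an L1 term against the TNode target."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: real pair = (speckled, TNode target), fake pair = (speckled, generated).
    fake = gen(speckled).detach()
    d_real = disc(speckled, tnode_target)
    d_fake = disc(speckled, fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: fool the discriminator while staying close to the TNode target.
    fake = gen(speckled)
    d_fake = disc(speckled, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * nn.functional.l1_loss(fake, tnode_target)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    gen, disc = Generator3D(), Discriminator3D()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    # Dummy partial OCT volume (batch, channel, z, x, y) standing in for real data.
    speckled = torch.rand(1, 1, 8, 32, 32)
    tnode_target = torch.rand(1, 1, 8, 32, 32)
    print(train_step(gen, disc, g_opt, d_opt, speckled, tnode_target))
```

In an actual pipeline, the speckled/despeckled volume pairs would be cropped from the three TNode-processed training OCT volumes mentioned in the abstract; the random tensors above merely keep the example self-contained.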
Source journal
Biomedical optics express (BIOCHEMICAL RESEARCH METHODS; OPTICS)
CiteScore: 6.80
Self-citation rate: 11.80%
Articles per year: 633
Review time: 1 month
Journal description: The journal's scope encompasses fundamental research, technology development, biomedical studies and clinical applications. BOEx focuses on leading-edge topics in the field, including: tissue optics and spectroscopy, novel microscopies, optical coherence tomography, diffuse and fluorescence tomography, photoacoustic and multimodal imaging, molecular imaging and therapies, nanophotonic biosensing, optical biophysics/photobiology, microfluidic optical devices, and vision research.