Practical volumetric speckle reduction in OCT using deep learning

B. R. Chintada, Sebastián Ruiz-Lopera, R. Restrepo, M. Villiger, B. Bouma, N. Uribe-Patarroyo
{"title":"Practical volumetric speckle reduction in OCT using deep learning","authors":"B. R. Chintada, Sebastián Ruiz-Lopera, R. Restrepo, M. Villiger, B. Bouma, N. Uribe-Patarroyo","doi":"10.1117/12.2670781","DOIUrl":null,"url":null,"abstract":"Speckle reduction has been an active topic of interest in the Optical Coherence Tomography (OCT) community and several techniques have been developed ranging from hardware-based methods, conventional image-processing and deep-learning based methods. The main goal of speckle reduction is to improve the diagnostic utility of OCT images by enhancing the image quality, thereby enhancing the visual interpretation of anatomical structures. We have previously introduced a probabilistic despeckling method based on non-local means for OCT—Tomographic Non-local-means despeckling (TNode). We demonstrated that this method efficiently suppresses speckle contrast while preserving tissue structures with dimensions approaching the system resolution. Despite the merits of this method, it is computationally very expensive: processing a typical retinal OCT volume takes a few hours. A much faster version of TNode with close to real-time performance, while keeping with the open source nature of TNode, could find much greater use in the OCT community. Deep learning despeckling methods have been proposed in OCT, including variants of conditional Generative Adversarial Networks (cGAN) and convolutional neural networks CNN. However, most of these methods have used B-scan compounding as a ground truth, which presents significant limitations in terms of speckle reduced tomograms with preservation of resolution. In addition, all these methods have focused on speckle suppression of individual B-scans, and their performance on volumetric tomograms is unclear: the expectation is that three-dimensional manipulations of these processed tomograms (i.e., en face projections) will contain artifacts due to the B-scan-wise processing, disrupting the continuity of tissue structures along the slow-scan axis. In addition, speckle suppression based on individual B-scans cannot provide the neural network with information on volumetric structures in the training data, and thus is expected to perform poorly on small structures. Indeed, most deep-learning despeckling works have focused on image quality metrics based on demonstrating strong speckle suppression, rather than focusing on preservation of contrast and small tissue structures. To overcome these problems, we propose an entire workflow to enable the wide-spread use of deep-learning speckle suppression in OCT: the ground-truth is generated using volumetric TNode despeckling, and the neural network uses a new cGAN that receives OCT partial volumes as inputs to utilize the three-dimensional structural information for speckle reduction. 
Because of its reliance on TNode for generating ground-truth data, this hybrid deep-learning–TNode (DL-TNode) framework will be made available to the OCT community to enable easy training and implementation in a multitude of OCT systems without relying on specialty-acquired training data.","PeriodicalId":278089,"journal":{"name":"European Conference on Biomedical Optics","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Conference on Biomedical Optics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2670781","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Speckle reduction has been an active topic of interest in the Optical Coherence Tomography (OCT) community, and several techniques have been developed, ranging from hardware-based methods to conventional image-processing and deep-learning-based methods. The main goal of speckle reduction is to improve the diagnostic utility of OCT images by enhancing image quality, thereby improving the visual interpretation of anatomical structures. We previously introduced a probabilistic despeckling method for OCT based on non-local means: Tomographic Non-local-means despeckling (TNode). We demonstrated that this method efficiently suppresses speckle contrast while preserving tissue structures with dimensions approaching the system resolution. Despite its merits, the method is computationally very expensive: processing a typical retinal OCT volume takes a few hours. A much faster version of TNode, with close-to-real-time performance and in keeping with the open-source nature of TNode, could find much greater use in the OCT community. Deep-learning despeckling methods have been proposed for OCT, including variants of conditional Generative Adversarial Networks (cGANs) and convolutional neural networks (CNNs). However, most of these methods have used B-scan compounding as the ground truth, which presents significant limitations in producing speckle-reduced tomograms with preserved resolution. In addition, all of these methods have focused on speckle suppression of individual B-scans, and their performance on volumetric tomograms is unclear: the expectation is that three-dimensional manipulations of the processed tomograms (e.g., en face projections) will contain artifacts due to the B-scan-wise processing, disrupting the continuity of tissue structures along the slow-scan axis. Furthermore, speckle suppression based on individual B-scans cannot provide the neural network with information on volumetric structures in the training data, and is thus expected to perform poorly on small structures. Indeed, most deep-learning despeckling works have focused on image-quality metrics that demonstrate strong speckle suppression, rather than on the preservation of contrast and small tissue structures. To overcome these problems, we propose an entire workflow to enable the widespread use of deep-learning speckle suppression in OCT: the ground truth is generated using volumetric TNode despeckling, and the neural network uses a new cGAN that receives OCT partial volumes as inputs in order to exploit three-dimensional structural information for speckle reduction. Because of its reliance on TNode for generating ground-truth data, this hybrid deep-learning-TNode (DL-TNode) framework will be made available to the OCT community to enable easy training and implementation in a multitude of OCT systems without relying on specially acquired training data.
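To make the computational burden concrete, the following is a minimal, naive sketch of the non-local-means principle that underlies TNode, applied to a 3D intensity volume. It is not the actual TNode implementation (which is probabilistic and tailored to OCT speckle statistics); the function name and parameters (patch_radius, search_radius, h) are illustrative assumptions. The exhaustive patch search repeated for every voxel illustrates why a straightforward volumetric implementation takes hours on a typical retinal volume.

```python
import numpy as np

def nlm_despeckle_3d(volume, patch_radius=1, search_radius=3, h=0.1):
    """Naive 3D non-local-means filter for an OCT intensity volume.

    Each voxel is replaced by a weighted average of voxels in a local
    search window; the weight of each candidate depends on how similar
    its surrounding patch is to the patch around the voxel being filtered.
    """
    pr, sr = patch_radius, search_radius
    pad = pr + sr
    padded = np.pad(volume.astype(np.float64), pad, mode="reflect")
    out = np.zeros(volume.shape, dtype=np.float64)

    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for x in range(volume.shape[2]):
                zc, yc, xc = z + pad, y + pad, x + pad
                ref = padded[zc - pr:zc + pr + 1,
                             yc - pr:yc + pr + 1,
                             xc - pr:xc + pr + 1]
                num, den = 0.0, 0.0
                # Exhaustive search over the local window: this is the
                # expensive step that volumetric despeckling repeats for
                # every voxel of the tomogram.
                for dz in range(-sr, sr + 1):
                    for dy in range(-sr, sr + 1):
                        for dx in range(-sr, sr + 1):
                            zq, yq, xq = zc + dz, yc + dy, xc + dx
                            cand = padded[zq - pr:zq + pr + 1,
                                          yq - pr:yq + pr + 1,
                                          xq - pr:xq + pr + 1]
                            d2 = np.mean((ref - cand) ** 2)
                            w = np.exp(-d2 / (h * h))
                            num += w * padded[zq, yq, xq]
                            den += w
                out[z, y, x] = num / den

    return out

# Example on a tiny synthetic volume (real retinal volumes are orders of
# magnitude larger, which is why this brute-force approach is slow).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.exponential(scale=1.0, size=(8, 16, 16))  # speckle-like intensities
    filtered = nlm_despeckle_3d(vol)
    print(vol.std() / vol.mean(), filtered.std() / filtered.mean())
```

In the proposed workflow, a trained network such as the cGAN described above replaces this exhaustive per-voxel patch search with a single forward pass over partial volumes, which is the intended source of the near-real-time speedup while the TNode output serves only as the training ground truth.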