Retinal OCT Denoising with Pseudo-Multimodal Fusion Network

Dewei Hu, Joseph D Malone, Yigit Atay, Yuankai K Tao, Ipek Oguz
{"title":"Retinal OCT Denoising with Pseudo-Multimodal Fusion Network.","authors":"Dewei Hu,&nbsp;Joseph D Malone,&nbsp;Yigit Atay,&nbsp;Yuankai K Tao,&nbsp;Ipek Oguz","doi":"10.1007/978-3-030-63419-3_13","DOIUrl":null,"url":null,"abstract":"<p><p>Optical coherence tomography (OCT) is a prevalent imaging technique for retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise-ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. Evaluated by intensity-based and structural metrics, the result shows that our method can effectively suppress the speckle noise and enhance the contrast between retina layers while the overall structure and small blood vessels are preserved. Compared to the single modality network, our method improves the structural similarity with low noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.</p>","PeriodicalId":93803,"journal":{"name":"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings","volume":"12069 ","pages":"125-135"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9241435/pdf/nihms-1752651.pdf","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-63419-3_13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/11/20 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Optical coherence tomography (OCT) is a prevalent imaging technique for the retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan, but can over-smooth fine features such as small vessels. A fusion network combines the desired features from each modality, with an adjustable weight on each modality's contribution. Evaluated with intensity-based and structural metrics, the results show that our method effectively suppresses speckle noise and enhances the contrast between retinal layers while preserving the overall structure and small blood vessels. Compared to the single-modality network, our method improves the structural similarity with the low-noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.
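To make the fusion idea concrete, here is a minimal PyTorch sketch of a generic two-branch network: one encoder takes the noisy single-frame B-scan, the other takes the self-fusion pseudo-modality, and a learnable weight balances their feature contributions before a shared decoder reconstructs the denoised B-scan. The layer sizes, the sigmoid-bounded weight, and the names (ToyFusionNet, conv_block) are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a two-branch fusion denoiser (illustrative only; not the
# authors' exact architecture). One branch encodes the noisy B-scan, the other
# encodes the self-fusion pseudo-modality, and a learnable weight blends their
# features before a shared decoder produces the denoised output.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, a common encoder building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class ToyFusionNet(nn.Module):
    """Two-branch encoder -> weighted feature fusion -> shared decoder."""

    def __init__(self, feat=32, alpha=0.5):
        super().__init__()
        self.enc_noisy = conv_block(1, feat)    # branch for the noisy B-scan
        self.enc_pseudo = conv_block(1, feat)   # branch for the pseudo-modality
        # Adjustable contribution weight between the two modalities.
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.decoder = nn.Sequential(
            conv_block(feat, feat),
            nn.Conv2d(feat, 1, kernel_size=1),  # reconstruct denoised B-scan
        )

    def forward(self, noisy, pseudo):
        f_noisy = self.enc_noisy(noisy)
        f_pseudo = self.enc_pseudo(pseudo)
        a = torch.sigmoid(self.alpha)           # keep the weight in (0, 1)
        fused = a * f_noisy + (1.0 - a) * f_pseudo
        return self.decoder(fused)


if __name__ == "__main__":
    net = ToyFusionNet()
    noisy = torch.rand(1, 1, 128, 128)   # single-frame noisy B-scan
    pseudo = torch.rand(1, 1, 128, 128)  # self-fusion pseudo-modality
    print(net(noisy, pseudo).shape)      # torch.Size([1, 1, 128, 128])
```

Training such a sketch would typically minimize a pixel-wise loss (e.g., L1 or MSE) against the frame-averaged, low-noise B-scan; the loss and architecture actually used by the authors should be taken from the full text.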
