Self-supervised contrastive learning improves machine learning discrimination of full thickness macular holes from epiretinal membranes in retinal OCT scans.

PLOS Digital Health · Pub Date: 2024-08-26 · eCollection Date: 2024-08-01 · DOI: 10.1371/journal.pdig.0000411
Timothy William Wheeler, Kaitlyn Hunter, Patricia Anne Garcia, Henry Li, Andrew Clark Thomson, Allan Hunter, Courosh Mehanian

Abstract

There is a growing interest in using computer-assisted models for the detection of macular conditions using optical coherence tomography (OCT) data. As the quantity of clinical scan data of specific conditions is limited, these models are typically developed by fine-tuning a generalized network to classify specific macular conditions of interest. Full thickness macular holes (FTMH) present a condition requiring urgent surgical repair to prevent vision loss. Other works on automated FTMH classification have tended to use supervised ImageNet pre-trained networks with good results but leave room for improvement. In this paper, we develop a model for FTMH classification using OCT B-scans around the central foveal region to pre-train a naïve network using contrastive self-supervised learning. We found that self-supervised pre-trained networks outperform ImageNet pre-trained networks despite a small training set size (284 eyes total, 51 FTMH+ eyes, 3 B-scans from each eye). On three replicate data splits, 3D spatial contrast pre-training yields a model with an average F1-score of 1.0 on holdout data (50 eyes total, 10 FTMH+), compared to an average F1-score of 0.831 for FTMH detection by ImageNet pre-trained models. These results demonstrate that even limited data may be applied toward self-supervised pre-training to substantially improve performance for FTMH classification, indicating applicability toward other OCT-based problems.
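The abstract describes contrastive self-supervised pre-training, in which nearby OCT B-scans from the same eye can serve as positive pairs. As a minimal sketch of this idea, the NT-Xent objective below (the standard contrastive loss from the SimCLR family; the paper's exact loss, architecture, and augmentations are not specified here) scores each embedding against its positive partner relative to all other samples in the batch:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two "views" of the same N samples,
    e.g. spatially adjacent B-scans treated as positives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive partner of row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = logsumexp - sim[np.arange(2 * n), pos]
    return loss.mean()

# toy check: perfectly aligned positives should incur a lower loss
# than randomly mismatched ones
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
aligned = nt_xent_loss(a, a)
shuffled = nt_xent_loss(a, rng.normal(size=(8, 16)))
print(aligned < shuffled)
```

Minimizing this loss pulls embeddings of positive pairs together and pushes apart all other pairs in the batch, which is the mechanism that lets a small unlabeled OCT dataset provide a useful pre-training signal before fine-tuning on the FTMH labels.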
