Timothy William Wheeler, Kaitlyn Hunter, Patricia Anne Garcia, Henry Li, Andrew Clark Thomson, Allan Hunter, Courosh Mehanian
PLOS Digital Health, vol. 3, no. 8, e0000411. Published 2024-08-26 (eCollection 2024/8). DOI: 10.1371/journal.pdig.0000411. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11346922/pdf/
Citations: 0
Abstract
Self-supervised contrastive learning improves machine learning discrimination of full thickness macular holes from epiretinal membranes in retinal OCT scans.
There is a growing interest in using computer-assisted models for the detection of macular conditions using optical coherence tomography (OCT) data. As the quantity of clinical scan data of specific conditions is limited, these models are typically developed by fine-tuning a generalized network to classify specific macular conditions of interest. Full thickness macular holes (FTMH) present a condition requiring urgent surgical repair to prevent vision loss. Other works on automated FTMH classification have tended to use supervised ImageNet pre-trained networks with good results but leave room for improvement. In this paper, we develop a model for FTMH classification using OCT B-scans around the central foveal region to pre-train a naïve network using contrastive self-supervised learning. We found that self-supervised pre-trained networks outperform ImageNet pre-trained networks despite a small training set size (284 eyes total, 51 FTMH+ eyes, 3 B-scans from each eye). On three replicate data splits, 3D spatial contrast pre-training yields a model with an average F1-score of 1.0 on holdout data (50 eyes total, 10 FTMH+), compared to an average F1-score of 0.831 for FTMH detection by ImageNet pre-trained models. These results demonstrate that even limited data may be applied toward self-supervised pre-training to substantially improve performance for FTMH classification, indicating applicability toward other OCT-based problems.
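The abstract does not spell out the pre-training objective beyond "contrastive self-supervised learning" with "3D spatial contrast." As a rough illustration of what such an objective looks like, below is a minimal NumPy sketch of an NT-Xent (SimCLR-style) contrastive loss, under the assumption that two views of the same eye (e.g., neighboring B-scans) are treated as a positive pair; `nt_xent_loss` and its parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (N, D) arrays of embeddings for two "views" of the same N
    samples (e.g., nearby OCT B-scans from the same eye as positives).
    Returns the mean cross-entropy over all 2N anchors.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    # the positive for anchor i is i+N (and vice versa)
    targets = np.concatenate([np.arange(n) + n, np.arange(n)])
    # cross-entropy of each anchor's positive against all other pairs
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

Pre-training minimizes this loss so that embeddings of scans from the same eye are pulled together and scans from different eyes are pushed apart; the pre-trained encoder is then fine-tuned on the (small) labeled FTMH set. Perfectly aligned views yield a lower loss than unrelated random embeddings, which is the gradient signal the encoder learns from.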