Max Jackson MPhys, Helen Kalirai BSc, PhD, Rumana N. Hussain MBBS, MD, Heinrich Heimann MD, PhD, Yalin Zheng MEng, PhD, Sarah E. Coupland MBBS, PhD

Ophthalmology Science, Volume 5, Issue 2, Article 100647. Published November 8, 2024. DOI: 10.1016/j.xops.2024.100647
Differentiating Choroidal Melanomas and Nevi Using a Self-Supervised Deep Learning Model Applied to Clinical Fundoscopy Images
Purpose
To test the validity of a self-supervised deep learning (DL) model, RETFound, for differentiating posterior uveal (choroidal) melanoma (UM) from nevus.
Design
Case-control study.
Subjects
Ultrawidefield fundoscopy images, both color and autofluorescence, obtained from 4255 patients seen at the Liverpool Ocular Oncology Center between 1995 and 2020, were used for this study.
Methods
After excluding poor-quality images, a total of 18 510 UM, 8671 nevus, and 1192 healthy eye images were analyzed. RETFound, a self-supervised DL model for fundus images, was first fine-tuned for binary classification of UM versus nevi and then retuned for tertiary classification that also included the healthy eyes.
Main Outcome Measures
The performance metrics used to evaluate the model were area under the receiver operating characteristic curve (AUROC), accuracy, specificity, sensitivity, F1-score, and Matthews correlation coefficient.
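For reference, all of the threshold-based metrics above derive from the four confusion-matrix counts (true/false positives and negatives). A minimal pure-Python sketch, illustrative only since the study's exact tooling is not stated here, with class 1 standing for melanoma and class 0 for nevus:

```python
from math import sqrt

def binary_metrics(y_true, y_pred):
    """Confusion-matrix-based metrics for a binary classifier.
    Positive class = 1 (e.g., melanoma), negative class = 0 (e.g., nevus)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall on melanomas
    specificity = tn / (tn + fp) if tn + fp else 0.0   # recall on nevi
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    # Matthews correlation coefficient: balanced even when classes are imbalanced,
    # which matters here (18 510 UM vs. 8671 nevus images).
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1, "mcc": mcc}
```

The MCC is included because, unlike accuracy, it remains informative under the class imbalance present in this cohort.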
Results
For the binary classification task, the model achieved an accuracy of 0.83 and an AUROC of 0.90, demonstrating good performance for UM versus nevus differentiation. Similarly, for the tertiary classification task, the model showed a mean accuracy of 0.82 and an AUROC of 0.92.
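Unlike accuracy, AUROC is threshold-free: it can be read as the probability that a randomly chosen melanoma image receives a higher model score than a randomly chosen nevus image (ties counting one half). A small illustrative implementation of that pairwise (Mann-Whitney U) definition, not the authors' code:

```python
def auroc(y_true, scores):
    """AUROC via the pairwise formulation: the probability that a
    randomly chosen positive (melanoma) outscores a randomly chosen
    negative (nevus), with ties credited 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.90 therefore means that in nine out of ten random melanoma-nevus pairs, the melanoma image is ranked higher.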
Conclusions
Our findings demonstrate the feasibility of using a self-supervised DL model to differentiate UM from nevi with high accuracy, in a large single-center cohort with class imbalance between image types. Validation studies on similarly sized external cohorts are planned to test our model's potential, given the variation in images of choroidal melanomas and nevi encountered in the clinical setting.
Financial Disclosure(s)
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.