DETECTION AND LOCALIZATION OF RETINAL BREAKS IN ULTRAWIDEFIELD FUNDUS PHOTOGRAPHY USING A YOLO V3 ARCHITECTURE-BASED DEEP LEARNING MODEL

Richul Oh, Baek-Lok Oh, Eun Kyoung Lee, Un Chul Park, Hyeong Gon Yu, Chang Ki Yoon
Retina (Philadelphia, Pa.), published 2022-10-01, pp. 1889–1896. DOI: 10.1097/IAE.0000000000003550

Abstract

Purpose: We aimed to develop a deep learning model for detecting and localizing retinal breaks in ultrawidefield fundus (UWF) images.

Methods: We retrospectively enrolled treatment-naive patients diagnosed with a retinal break or rhegmatogenous retinal detachment for whom UWF images were available. The model was developed on a YOLO v3 architecture backbone using transfer learning. Model performance was evaluated with per-image classification and per-object detection.
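Per-object detection is typically scored by matching each predicted bounding box to a ground-truth lesion via intersection-over-union (IoU). The abstract does not state the paper's exact matching criterion, so the sketch below is a hypothetical illustration: greedy confidence-ordered matching with an assumed IoU threshold of 0.5, which is a common convention for this kind of evaluation.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, iou_thr=0.5):
    """Greedy matching: highest-confidence predictions claim ground-truth
    breaks first; each ground-truth box is matched at most once.
    Returns (true positives, false positives, false negatives)."""
    matched = set()
    tp = 0
    for p in sorted(preds, key=lambda d: -d["score"]):
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p["box"], g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp, fp, fn
```

Sweeping the model's confidence threshold over such matches yields the precision-recall curve from which average precision is computed.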

Results: Overall, 4,505 UWF images from 940 patients were used in the current study. Among them, 306 UWF images from 84 patients were included in the test set. In per-object detection, the average precision for the object detection model considering every retinal break was 0.840. With the best threshold, the overall precision, recall, and F1 score were 0.6800, 0.9189, and 0.7816, respectively. In the per-image classification, the model showed an area under the receiver operating characteristic curve of 0.957 within the test set. The overall accuracy, sensitivity, and specificity in the test data set were 0.9085, 0.8966, and 0.9158, respectively.
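The reported F1 score is the harmonic mean of the reported precision and recall, which can be checked directly:

```python
# Precision and recall at the best threshold, as reported in the abstract.
precision = 0.6800
recall = 0.9189

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # 0.7816, matching the reported value
```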

Conclusion: The UWF image-based deep learning model evaluated in the current study performed well in diagnosing and locating retinal breaks.
