OcuViT: A Vision Transformer-Based Approach for Automated Diabetic Retinopathy and AMD Classification.

Faisal Ahmed, M D Joshem Uddin
{"title":"OcuViT: A Vision Transformer-Based Approach for Automated Diabetic Retinopathy and AMD Classification.","authors":"Faisal Ahmed, M D Joshem Uddin","doi":"10.1007/s10278-025-01676-3","DOIUrl":null,"url":null,"abstract":"<p><p>Early detection and accurate classification of retinal diseases, such as diabetic retinopathy (DR) and age-related macular degeneration (AMD), are essential to preventing vision loss and improving patient outcomes. Traditional methods for analyzing retinal fundus images are often manual, prolonged, and rely on the expertise of the clinician, leading to delays in diagnosis and treatment. Recent advances in machine learning, particularly deep learning, have introduced automated systems to assist in retinal disease detection; however, challenges such as computational inefficiency and robustness still remain. This paper proposes a novel approach that utilizes vision transformers (ViT) through transfer learning to address challenges in ophthalmic diagnostics. Using a pre-trained ViT-Base-Patch16-224 model, we fine-tune it for diabetic retinopathy (DR) and age-related macular degeneration (AMD) classification tasks. To adapt the model for retinal fundus images, we implement a streamlined preprocessing pipeline that converts the images into PyTorch tensors and standardizes them, ensuring compatibility with the ViT architecture and improving model performance. We validated our model, OcuViT, on two datasets. We used the APTOS dataset to perform binary and five-level severity classification and the IChallenge-AMD dataset for grading age-related macular degeneration (AMD). In the five-class DR and AMD grading tasks, OcuViT outperforms all existing CNN- and ViT-based methods across multiple metrics, achieving superior accuracy and robustness. For the binary DR task, it delivers highly competitive performance. 
These results demonstrate that OcuViT effectively leverages ViT-based transfer learning with an efficient preprocessing pipeline, significantly improving the precision and reliability of automated ophthalmic diagnosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-025-01676-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Early detection and accurate classification of retinal diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD) are essential to preventing vision loss and improving patient outcomes. Traditional analysis of retinal fundus images is often manual, time-consuming, and dependent on clinician expertise, leading to delays in diagnosis and treatment. Recent advances in machine learning, particularly deep learning, have introduced automated systems that assist in retinal disease detection; however, challenges such as computational inefficiency and limited robustness remain. This paper proposes a novel approach that applies vision transformers (ViT) via transfer learning to address these challenges in ophthalmic diagnostics. Starting from a pre-trained ViT-Base-Patch16-224 model, we fine-tune it for DR and AMD classification tasks. To adapt the model to retinal fundus images, we implement a streamlined preprocessing pipeline that converts the images into PyTorch tensors and standardizes them, ensuring compatibility with the ViT architecture and improving model performance. We validated our model, OcuViT, on two datasets: the APTOS dataset for binary and five-level DR severity classification, and the IChallenge-AMD dataset for AMD grading. In the five-class DR and AMD grading tasks, OcuViT outperforms existing CNN- and ViT-based methods across multiple metrics, achieving superior accuracy and robustness; on the binary DR task, it delivers highly competitive performance. These results demonstrate that OcuViT effectively combines ViT-based transfer learning with an efficient preprocessing pipeline, improving the precision and reliability of automated ophthalmic diagnosis.
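The preprocessing step described above, converting fundus images to PyTorch tensors and standardizing them for the 224x224 input expected by ViT-Base-Patch16-224, can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the bilinear resize and the ImageNet normalization statistics are assumptions commonly used with ImageNet-pretrained ViTs, and are not confirmed by the abstract.

```python
import torch
import torch.nn.functional as F

# Assumed ImageNet channel statistics, standard for ImageNet-pretrained ViT backbones.
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img: torch.Tensor) -> torch.Tensor:
    """Sketch of the abstract's preprocessing pipeline (assumed details).

    Takes a (3, H, W) uint8 fundus image tensor, resizes it to the
    224x224 resolution that ViT-Base-Patch16-224 expects, and
    standardizes it channel-wise.
    """
    x = img.float() / 255.0                      # uint8 [0, 255] -> float [0, 1]
    x = F.interpolate(x.unsqueeze(0),            # add batch dim for interpolate
                      size=(224, 224),
                      mode="bilinear",
                      align_corners=False).squeeze(0)
    return (x - IMAGENET_MEAN) / IMAGENET_STD    # channel-wise standardization
```

The resulting (3, 224, 224) tensor can be batched and fed directly to a pretrained ViT whose classification head has been replaced for the 2-class or 5-class grading task.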
