Performance of ResNet-18 and InceptionResNetV2 in Automated Detection of Diabetic Retinopathy

Akwasi Asare, Alvin Adjei Broni, Alex Kwasi Asare Dickson, Mary Sagoe, Joshua Makafui Cudjoe

Medicine Advances, vol. 3, no. 3, pp. 231-241. Published 2025-07-15. DOI: 10.1002/med4.70023
https://onlinelibrary.wiley.com/doi/10.1002/med4.70023
Citations: 0
Abstract
Background
Diabetic retinopathy (DR) is a leading cause of blindness worldwide among individuals with diabetes. Manual detection of DR by ophthalmologists is time-consuming and resource-intensive, making early automated detection essential for mitigating the risk of vision impairment. This study evaluates the effectiveness of two deep learning models, ResNet-18 and InceptionResNetV2, for detecting and classifying DR from retinal fundus images, with the aim of identifying the more suitable model for clinical application.
Methods
A dataset of 3662 retinal fundus images, divided into five DR severity classes, was used to train and test ResNet-18 and InceptionResNetV2. The key performance metrics used to assess classification across the DR stages included testing accuracy, precision, recall, specificity, F1 score, and area under the curve (AUC).
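Each of the listed metrics can be derived from one-vs-rest counts (true/false positives and negatives) per severity class. A minimal stdlib-only sketch of how such per-class metrics are computed; the class names match the paper's five DR stages, but the example labels are illustrative, not the study's data:

```python
def per_class_metrics(y_true, y_pred, classes):
    """Compute precision, recall, specificity, and F1 for each class,
    treating that class as positive and all others as negative."""
    results = {}
    n = len(y_true)
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tn = n - tp - fp - fn
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        results[c] = {"precision": precision, "recall": recall,
                      "specificity": specificity, "f1": f1}
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    return accuracy, results

# Toy example over the five DR severity classes (not the study's data)
classes = ["No DR", "Mild", "Moderate", "Severe", "Proliferative DR"]
y_true = ["No DR", "Mild", "Moderate", "No DR", "Severe", "Proliferative DR"]
y_pred = ["No DR", "Moderate", "Moderate", "No DR", "Severe", "Proliferative DR"]
acc, metrics = per_class_metrics(y_true, y_pred, classes)
```

Testing accuracy is simply the fraction of correctly classified images; the remaining metrics are typically reported per class or macro-averaged across the five stages.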
Results
ResNet-18 achieved a testing accuracy of 83% and an AUC of 0.946, showing robust generalization across DR stages. InceptionResNetV2 achieved a testing accuracy of 70.4% and an AUC of 0.9305, with high precision in distinguishing “No DR” cases. However, it exhibited overfitting, particularly in the “Mild” and “Proliferative DR” classes, whereas ResNet-18 demonstrated more stable performance across categories.
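A single AUC for a five-class problem is usually obtained by averaging one-vs-rest AUCs; the abstract does not state which averaging scheme was used, so the sketch below shows one common choice (macro-averaged one-vs-rest AUC) computed from the rank statistic. All scores are made up for illustration:

```python
def binary_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive sample is scored above a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(y_true, probs, classes):
    """Macro-average of one-vs-rest AUCs.

    probs[i][c] is the model's predicted probability of class c for sample i.
    """
    aucs = []
    for c in classes:
        labels = [1 if t == c else 0 for t in y_true]
        scores = [p[c] for p in probs]
        aucs.append(binary_auc(labels, scores))
    return sum(aucs) / len(aucs)

# Toy usage: two classes, perfectly separated scores
probs = [{"No DR": 0.9, "Mild": 0.1}, {"No DR": 0.2, "Mild": 0.8}]
auc = macro_ovr_auc(["No DR", "Mild"], probs, ["No DR", "Mild"])
```

An AUC near 0.93-0.95, as reported for both models, means a randomly chosen image of a given stage is ranked above a randomly chosen image of the other stages roughly 93-95% of the time, which can coexist with the lower raw accuracy seen for InceptionResNetV2.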
Conclusions
Our results suggest that ResNet-18 holds significant potential as an automated DR detection tool, providing reliable classification and superior generalization across DR stages. Integrating deep learning models such as ResNet-18 into clinical workflows may enhance early DR diagnosis and timely intervention, reducing the risk of vision impairment among patients with diabetes.