An enhanced interpretable deep learning approach for diabetic retinopathy detection

Soha Alrajjou, Edward Kwadwo Boahen, Chunyun Meng, Keyang Cheng
{"title":"An enhanced interpretable deep learning approach for diabetic retinopathy detection","authors":"Soha Alrajjou, Edward Kwadwo Boahen, Chunyun Meng, Keyang Cheng","doi":"10.1109/CyberC55534.2022.00029","DOIUrl":null,"url":null,"abstract":"Diabetic Retinopathy (DR) is a consequence of type1 or type-2 diabetes. It is critical to identify complications early since they may result in visual issues such as retinal detachment, vitreous hemorrhage, and glaucoma. The interpretability of automated classifiers for medical diagnoses such as diabetic retinopathy is critical. The primary issue is the difficulties inherent in inferring reasonable conclusions from them. In recent years, numerous efforts have been made to transform deep learning classifiers from statistical black box machines with high confidence to self-explanatory models. The concern of effective data preprocessing before classification remains unsolved. Although the application of machine Learning schemes has proven to be effective when trained in a supervised way, it still has limitations with data redundancy, feature selection, and human expert interference. Hence, a combinatorial deep learning approach is proposed to interpret diabetic retinopathy (DR) detection. The proposed method combines the Shapley Additive Explainability (SHAP) and Local 127:1679 Model-Agnostic Explanations (LIME) to analyze the deep learning output effectively. Results from our experiment show that our proposed approach outperformed the existing schemes in detecting DR.","PeriodicalId":234632,"journal":{"name":"2022 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CyberC55534.2022.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Diabetic Retinopathy (DR) is a consequence of type-1 or type-2 diabetes. It is critical to identify complications early, since they may result in visual problems such as retinal detachment, vitreous hemorrhage, and glaucoma. The interpretability of automated classifiers for medical diagnoses such as diabetic retinopathy is therefore critical; the primary issue is the difficulty of inferring reasonable conclusions from them. In recent years, numerous efforts have been made to transform deep learning classifiers from high-confidence statistical black boxes into self-explanatory models. The question of effective data preprocessing before classification, however, remains unresolved. Although machine learning schemes have proven effective when trained in a supervised way, they still face limitations with data redundancy, feature selection, and human expert intervention. Hence, a combinatorial deep learning approach is proposed for interpretable diabetic retinopathy (DR) detection. The proposed method combines Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) to analyze the deep learning output effectively. Results from our experiment show that the proposed approach outperforms existing schemes in detecting DR.
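The abstract gives no implementation details, so the following is only a minimal sketch of how SHAP and LIME explanations can be produced for the same prediction of a deep-learning DR classifier using the `shap` and `lime` Python packages. The tiny CNN and the random "fundus images" below are hypothetical placeholders standing in for the paper's trained model and preprocessed data; this is not the authors' code.

```python
"""Minimal sketch (not the paper's implementation): explain one prediction
of a DR classifier with both SHAP and LIME. Model and images are placeholders."""
import numpy as np
import shap
import tensorflow as tf
from lime import lime_image

# Placeholder stand-ins for a trained DR classifier and preprocessed fundus images.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # DR / no-DR
])
images = np.random.rand(20, 64, 64, 3).astype(np.float32)

# SHAP: gradient-based attribution of one prediction to input pixels,
# using a small background sample to estimate expected values.
shap_explainer = shap.GradientExplainer(model, images[:10])
shap_values = shap_explainer.shap_values(images[10:11])

# LIME: perturb superpixels of the same image and fit a local surrogate
# model that highlights the regions driving the predicted class.
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    images[10].astype("double"),
    classifier_fn=lambda x: model.predict(x.astype(np.float32)),
    top_labels=1,
    hide_color=0,
    num_samples=200,
)
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)

print("SHAP attribution shape:", np.array(shap_values).shape)
print("LIME highlighted pixels:", int(mask.sum()))
```

Inspecting the two outputs side by side, pixel-level SHAP attributions and LIME's superpixel mask, is one plausible way to realize the combined SHAP+LIME analysis the abstract describes.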