Deep Learning-Based Detection and Classification of Uveal Melanoma Using Convolutional Neural Networks and SHAP Analysis

Esmaeil Shakeri, Emad A. Mohammed, Trafford Crump, E. Weis, C. Shields, Sandor R. Ferenczy, B. Far

2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI), August 2023
DOI: 10.1109/IRI58017.2023.00044
Uveal melanoma (UM) is a severe intraocular cancer in adults aged 50-80, often originating from choroidal nevus, a common intraocular tumour. This malignant transformation can lead to vision loss, metastasis, and even death. Early prediction of UM can reduce the risk of death. In this study, we employed transfer learning techniques and four convolutional neural network (CNN)-based architectures to detect UM and enhance the interpretation of diagnostic results. To accomplish this, we manually gathered 854 RGB fundus images from two distinct datasets, representing the right and left eyes of 854 unique patients (427 lesions and 427 non-lesions). Preprocessing steps, such as image conversion, resizing, and data augmentation, were performed before training and validating the classification results. We utilized InceptionV3, Xception, DenseNet121, and DenseNet169 pre-trained models to improve the generalizability and performance of our results, evaluating each architecture on an external validation set. To address the issue of interpretability in deep learning (DL) models and minimize the black-box problem, we employed the SHapley Additive exPlanations (SHAP) analysis approach to identify the regions of an eye image that contribute most to the prediction of choroidal nevus (CN). The performance results of the DL models revealed that DenseNet169 achieved the highest accuracy (89%) and the lowest loss value (0.65%) for the binary classification of CN. The SHAP findings demonstrate that this method can serve as a tool for interpreting classification results by providing additional context about individual sample images and facilitating a more comprehensive evaluation of binary classification in CN.
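The preprocessing pipeline the abstract mentions (image conversion, resizing, and data augmentation) might be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the 224x224 target size, nearest-neighbour resizing, and horizontal-flip augmentation are assumptions, since the abstract does not specify these details.

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    # Nearest-neighbour resize: map each output pixel to the closest
    # source pixel by integer index scaling. (Assumed method; the paper
    # does not state which interpolation was used.)
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess(img):
    # Convert 8-bit RGB to float32 in [0, 1], then resize to a fixed
    # input shape suitable for a pre-trained CNN backbone.
    img = img.astype(np.float32) / 255.0
    return resize_nearest(img)

def augment(img):
    # Simple augmentation: the original image plus its horizontal flip.
    return [img, img[:, ::-1]]

# Example: one synthetic 600x800 RGB fundus image yields two
# 224x224 float training samples.
raw = np.random.randint(0, 256, size=(600, 800, 3), dtype=np.uint8)
samples = augment(preprocess(raw))
print(len(samples), samples[0].shape)
```

In practice the resizing and augmentation would typically be handled by a framework utility (e.g. Keras image utilities) rather than hand-rolled, but the sketch makes the tensor shapes and value ranges explicit.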