Explaining deep-learning models using gradient-based localization for reliable tea-leaves classifications
Puja Banerjee, Susmita Banerjee, R. P. Barnwal
2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC), published 2022-01-10
DOI: 10.1109/ICAECC54045.2022.9716699
In deep-learning solutions there has been considerable ambiguity about how to make explainability an integral part of the machine-learning pipeline. Recently, several deep-learning techniques have been introduced to solve increasingly complicated problems with higher predictive capacity. However, this predictive power comes at the cost of high computational complexity and poor interpretability. While these models often produce very accurate predictions, we need to be able to explain the path such models follow when making decisions. Deep-learning models, in general, predict with little or no interpretable explanation; this lack of explainability makes them black boxes. Explainable Artificial Intelligence (XAI) aims to transform this black-box approach into a more interpretable one. In this paper, we apply the well-known Grad-CAM technique to the explainability of the tea-leaf classification problem. The proposed method classifies tea-leaf-bud combinations using pre-trained deep-learning models. We add classification explainability to our tea-leaf dataset by using the pre-trained model as input to the Grad-CAM technique to produce class-specific heatmaps. We analyzed the results and workings of the classification models for their reliability and effectiveness.