William Philipp, R. Yashwanthika, O. K. Sikha, Raul Benitez
Title: Generation of Rule-Based Explanations of CNN Classifiers Using Regional Features
DOI: 10.1007/s11063-024-11678-x (https://doi.org/10.1007/s11063-024-11678-x)
Journal: Neural Processing Letters, Volume 12, Issue 1
Published: 2024-09-05 (Journal Article)
Impact Factor: 2.6 (JCR Q3, Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Although Deep Learning networks generally outperform traditional machine learning approaches based on tailored features, they often lack explainability. To address this issue, numerous methods have been proposed, particularly for image-related tasks such as image classification or object segmentation. These methods generate a heatmap that visually explains the classification by identifying the regions most important to the classifier. However, these explanations remain purely visual. To overcome this limitation, we introduce a novel CNN explainability method that identifies the most relevant regions in an image and generates a decision tree based on meaningful regional features, providing a rule-based explanation of the classification model. We evaluated the proposed method on a synthetic blobs dataset and subsequently applied it to two cell image classification datasets with healthy and pathological patterns.
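The pipeline described in the abstract — threshold a CNN saliency heatmap, extract features from the resulting regions, then fit a decision tree whose rules explain the classifier — can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the thresholding scheme, the feature set (`n_regions`, `mean_area`, `top_mean_intensity`), and the toy heatmaps are all assumptions made here for demonstration.

```python
import numpy as np
from scipy import ndimage
from sklearn.tree import DecisionTreeClassifier, export_text

def regional_features(heatmap, image, thresh=0.5):
    """Threshold a saliency heatmap, label its connected regions, and
    summarize them as a small per-image feature vector: region count,
    mean region area, and mean image intensity inside the largest region."""
    mask = heatmap > thresh
    labels, n = ndimage.label(mask)
    if n == 0:
        return [0, 0.0, 0.0]
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    top = int(np.argmax(areas)) + 1          # label of the largest region
    top_intensity = float(image[labels == top].mean())
    return [n, float(np.mean(areas)), top_intensity]

# Toy data standing in for images and their CNN heatmaps: class 1 images
# get systematically "hotter" heatmaps than class 0.
rng = np.random.default_rng(0)
X, y = [], []
for cls in (0, 1):
    for _ in range(20):
        img = rng.random((32, 32))
        heat = rng.random((32, 32)) * (0.6 + 0.4 * cls)
        X.append(regional_features(heat, img))
        y.append(cls)

# A shallow tree keeps the extracted rules human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(
    tree, feature_names=["n_regions", "mean_area", "top_mean_intensity"]))
```

`export_text` prints the fitted tree as nested if/else rules over the regional features, which is the kind of rule-based explanation the abstract contrasts with purely visual heatmaps.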
Journal Introduction:
Neural Processing Letters is an international journal publishing research results and innovative ideas on all aspects of artificial neural networks. Coverage includes theoretical developments, biological models, new formal modes of learning, applications, software and hardware developments, and prospective research.
The journal promotes the fast exchange of information within the community of neural network researchers and users. The resurgence of interest in artificial neural networks since the early 1980s has been accompanied by tremendous research activity in specialized and multidisciplinary groups. Research, however, is not possible without good communication and the exchange of information between people, especially in a field covering such different areas; fast communication is also key, and this is the rationale for Neural Processing Letters.