Interpretable Image Classification Model Using Formal Concept Analysis Based Classifier
Minal Khatri, Adam Voshall, S. Batra, Sukhwinder Kaur, Jitender S. Deogun
EPiC Series in Computing, 2022. doi: 10.29007/rp6q
Massive amounts of data gathered over the last decade have contributed significantly to the applicability of deep neural networks. Deep learning is well suited to processing such volumes of data because model performance improves as more data is fed in. However, in the existing literature a deep neural classifier is often treated as a "black box" because its decision process is not transparent and researchers cannot determine how the input is associated with the output. In many domains, such as medicine, interpretability is critical because of the nature of the application. Our research focuses on adding interpretability to this black box by integrating Formal Concept Analysis (FCA) into the image classification pipeline, converting it into a glass box. The proposed approach produces a low-dimensional feature vector for an image dataset using an autoencoder, followed by supervised fine-tuning of the features using a deep neural classifier and Linear Discriminant Analysis (LDA). The resulting low-dimensional feature vector is then processed by an FCA-based classifier. The FCA framework yields a glass-box classifier from which the relationship between the target class and the low-dimensional feature set can be derived; it also helps researchers understand and refine the classification task. We use the MNIST dataset to test the interface between the deep neural network and the FCA classifier. The classifier achieves an accuracy of 98.7% for binary classification and 97.38% for multi-class classification. We compare the performance of the proposed classifier with convolutional neural networks (CNNs) and random forests.
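To make the pipeline described in the abstract concrete, the sketch below strings together the stages it names: an autoencoder that compresses MNIST images, a supervised softmax head that fine-tunes the encoded features, an LDA projection, and a deliberately simplified FCA-style step that binarizes the projected features into a formal context and matches test images against per-class attribute sets. Everything beyond the abstract's outline is an assumption made for illustration: the layer sizes, training epochs, median-threshold binarization, and intent-matching rule are not the authors' method, and Keras and scikit-learn are convenience choices.

```python
# Illustrative sketch only: the paper's exact architectures, hyperparameters, and FCA
# classifier are not given in the abstract, so everything below is an assumption.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 1. Load MNIST and flatten images to 784-dimensional vectors in [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# 2. Autoencoder: learn a low-dimensional code (32 units here, an assumed size).
inp = layers.Input(shape=(784,))
code = layers.Dense(128, activation="relu")(inp)
code = layers.Dense(32, activation="relu")(code)
out = layers.Dense(128, activation="relu")(code)
out = layers.Dense(784, activation="sigmoid")(out)
autoencoder = Model(inp, out)
encoder = Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256, verbose=0)

# 3. Supervised fine-tuning: attach a softmax head to the encoder so the encoded
#    features become class-discriminative (a simplified stand-in for the paper's
#    deep neural classifier step; the encoder weights are shared and updated).
head = layers.Dense(10, activation="softmax")(code)
clf = Model(inp, head)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(x_train, y_train, epochs=5, batch_size=256, verbose=0)

# 4. LDA: project the fine-tuned codes onto at most (n_classes - 1) directions.
lda = LinearDiscriminantAnalysis(n_components=9)
z_train = lda.fit_transform(encoder.predict(x_train, verbose=0), y_train)
z_test = lda.transform(encoder.predict(x_test, verbose=0))

# 5. Formal context: binarize each LDA feature at its training median, so
#    objects = images and attributes = "feature d is above its median".
thresholds = np.median(z_train, axis=0)
ctx_train = z_train > thresholds
ctx_test = z_test > thresholds

# 6. Toy FCA-flavored decision rule: treat the attributes typical of a class as its
#    intent and assign each test image to the class whose intent it matches best.
#    (A real FCA classifier would derive concepts from the context instead.)
class_intents = {}
for c in np.unique(y_train):
    support = ctx_train[y_train == c].mean(axis=0)
    class_intents[c] = support > 0.5

def classify(obj_attrs):
    scores = {c: np.sum(obj_attrs == intent) for c, intent in class_intents.items()}
    return max(scores, key=scores.get)

preds = np.array([classify(row) for row in ctx_test])
# Accuracy of the toy rule; expected to be well below the paper's reported figures.
print("toy pipeline accuracy:", (preds == y_test).mean())
```

A full FCA classifier would construct the concept lattice from the binary context and read class/feature relationships off its concepts; the majority-vote intents above only mark where such a component plugs into the pipeline.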