{"title":"Identifying Simple Shapes to Classify the Big Picture","authors":"Megan Liang, Gabrielle Palado, Will N. Browne","doi":"10.1109/IVCNZ48456.2019.8960989","DOIUrl":null,"url":null,"abstract":"In recent years, Deep Artificial Neural Networks (DNNs) have demonstrated their ability in solving visual classification problems. However, an impediment is transparency where it is difficult to interpret why an object is classified in a particular way. Furthermore, it is also difficult to validate whether a learned model truly represents a problem space. Learning Classifier Systems (LCSs) are an Evolutionary Computation technique capable of producing human-readable rules that explain why an instance has been classified, i.e. the system is fully transparent. However, because they can encode complex relationships between features, they are not best suited to domains with a large number of input features, e.g. classification in pixel images. Thus, the aim of this work is to develop a novel DNN-LCS system where the former extracts features from pixels and the latter classifies objects from these features with clear decision boundaries. Results show that the system can explain its classification decisions on curated image data, e.g. plates have elliptical or rectangular shapes. 
This work represents a promising step towards explainable artificial intelligence in computer vision.","PeriodicalId":217359,"journal":{"name":"2019 International Conference on Image and Vision Computing New Zealand (IVCNZ)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Image and Vision Computing New Zealand (IVCNZ)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVCNZ48456.2019.8960989","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
In recent years, Deep Artificial Neural Networks (DNNs) have demonstrated their ability to solve visual classification problems. However, a key impediment is their lack of transparency: it is difficult to interpret why an object is classified in a particular way. Furthermore, it is also difficult to validate whether a learned model truly represents a problem space. Learning Classifier Systems (LCSs) are an Evolutionary Computation technique capable of producing human-readable rules that explain why an instance has been classified, i.e. the system is fully transparent. However, because they can encode complex relationships between features, they are not well suited to domains with a large number of input features, e.g. classification of pixel images. Thus, the aim of this work is to develop a novel DNN-LCS system in which the former extracts features from pixels and the latter classifies objects from these features with clear decision boundaries. Results show that the system can explain its classification decisions on curated image data, e.g. plates have elliptical or rectangular shapes. This work represents a promising step towards explainable artificial intelligence in computer vision.
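The pipeline the abstract describes can be sketched as a two-stage system: a feature extractor that maps pixels to shape-presence scores, feeding an LCS-style population of human-readable interval rules that produce the final (and explainable) class label. This is a minimal illustrative sketch, not the authors' implementation: the feature names, thresholds, and stubbed-out extractor are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """An LCS-style rule: interval conditions over named features -> class label."""
    conditions: dict  # feature name -> (low, high) interval; absent = wildcard
    label: str

    def matches(self, features: dict) -> bool:
        # The rule fires only if every conditioned feature lies in its interval.
        return all(lo <= features[name] <= hi
                   for name, (lo, hi) in self.conditions.items())

def extract_features(image) -> dict:
    """Stand-in for the DNN stage: shape-presence scores in [0, 1].
    Hypothetical fixed outputs here; the real system would use a trained network."""
    return {"ellipse": 0.9, "rectangle": 0.1, "triangle": 0.0}

# The transparent part of the system: rules a human can read directly,
# e.g. "a strong ellipse or rectangle response means plate".
rules = [
    Rule({"ellipse": (0.5, 1.0)}, "plate"),
    Rule({"rectangle": (0.5, 1.0)}, "plate"),
    Rule({"triangle": (0.5, 1.0)}, "sign"),
]

def classify(image) -> str:
    feats = extract_features(image)
    for rule in rules:
        if rule.matches(feats):
            return rule.label  # the matching rule is itself the explanation
    return "unknown"

print(classify(None))  # the stub's ellipse score fires the first rule -> "plate"
```

Because the decision is made by whichever rule matches, the explanation ("classified as plate because ellipse score is in [0.5, 1.0]") falls directly out of the classifier, which is the transparency property the abstract attributes to the LCS stage.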