Explaining Deep Neural Network using Layer-wise Relevance Propagation and Integrated Gradients

Ivan Cík, Andrindrasana David Rasamoelina, M. Mach, P. Sinčák

2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2021-01-21. DOI: 10.1109/SAMI50585.2021.9378686
Abstract
Machine learning has become an integral part of today's technology, and the field of artificial intelligence is studied by a wide scientific community. In particular, thanks to improved methodology, the availability of big data, and increased computing power, today's machine learning algorithms can achieve excellent performance that sometimes even exceeds the human level. However, due to their nested nonlinear structure, these models are generally considered "black boxes" that provide no information about what exactly leads them to a specific output. This has raised the need to interpret these algorithms and understand how they work, since they are applied even in areas where errors can cause critical damage. This article describes the Integrated Gradients [1] and Layer-wise Relevance Propagation [2] methods and presents individual experiments with them. In the experiments we used well-known datasets: MNIST [3], Fashion-MNIST [4], and Imagenette and Imagewoof, which are subsets of ImageNet [5].
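For context, Integrated Gradients attributes a model's prediction to its input features by accumulating gradients along a straight-line path from a baseline input to the actual input and scaling by the input-baseline difference. The following is a minimal sketch in PyTorch; the zero baseline, step count, and the hypothetical classifier `net` are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch

def integrated_gradients(model, x, baseline=None, target=0, steps=50):
    """Approximate Integrated Gradients attributions for a single input.

    Accumulates gradients of the target output along the straight-line path
    from `baseline` to `x` (a Riemann-sum approximation of the path integral),
    then scales the averaged gradient by (x - baseline).
    """
    if baseline is None:
        baseline = torch.zeros_like(x)  # common choice: an all-black image

    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolated point between the baseline and the input
        point = (baseline + alpha * (x - baseline)).detach()
        point.requires_grad_(True)
        # Score of the target class for this interpolated input
        output = model(point.unsqueeze(0))[0, target]
        grad = torch.autograd.grad(output, point)[0]
        total_grad += grad

    return (x - baseline) * total_grad / steps

# Example usage with a hypothetical MNIST classifier `net` and image tensor `img`:
# attributions = integrated_gradients(net, img, target=predicted_class)
```

The returned tensor has the same shape as the input, so for image data it can be visualized directly as a heatmap over pixels, which is how attribution maps are typically inspected on datasets such as MNIST.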