{"title":"面向无监督学习的深度神经网络可视化","authors":"Alexander Bartler, Darius Hinderer, Bin Yang","doi":"10.23919/Eusipco47968.2020.9287730","DOIUrl":null,"url":null,"abstract":"Nowadays, the explainability of deep neural networks is an essential part of machine learning. In the last years, many methods were developed to visualize important regions of an input image for the decision of the deep neural network. Since almost all methods are designed for supervised trained models, we propose in this work a visualization technique for unsupervised trained autoencoders called Gradient-weighted Latent Activation Mapping (Grad-LAM). We adapt the idea of Grad-CAM and propose a novel weighting based on the knowledge of the autoencoder’s decoder. Our method will help to get insights into the highly nonlinear mapping of an input image to a latent space. We show that the visualization maps of Grad-LAM are meaningful on simple datasets like MNIST and the method is even applicable to real-world datasets like ImageNet.","PeriodicalId":6705,"journal":{"name":"2020 28th European Signal Processing Conference (EUSIPCO)","volume":"68 1","pages":"1407-1411"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Grad-LAM: Visualization of Deep Neural Networks for Unsupervised Learning\",\"authors\":\"Alexander Bartler, Darius Hinderer, Bin Yang\",\"doi\":\"10.23919/Eusipco47968.2020.9287730\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, the explainability of deep neural networks is an essential part of machine learning. In the last years, many methods were developed to visualize important regions of an input image for the decision of the deep neural network. 
Since almost all methods are designed for supervised trained models, we propose in this work a visualization technique for unsupervised trained autoencoders called Gradient-weighted Latent Activation Mapping (Grad-LAM). We adapt the idea of Grad-CAM and propose a novel weighting based on the knowledge of the autoencoder’s decoder. Our method will help to get insights into the highly nonlinear mapping of an input image to a latent space. We show that the visualization maps of Grad-LAM are meaningful on simple datasets like MNIST and the method is even applicable to real-world datasets like ImageNet.\",\"PeriodicalId\":6705,\"journal\":{\"name\":\"2020 28th European Signal Processing Conference (EUSIPCO)\",\"volume\":\"68 1\",\"pages\":\"1407-1411\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 28th European Signal Processing Conference (EUSIPCO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/Eusipco47968.2020.9287730\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 28th European Signal Processing Conference (EUSIPCO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/Eusipco47968.2020.9287730","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Grad-LAM: Visualization of Deep Neural Networks for Unsupervised Learning
Nowadays, the explainability of deep neural networks is an essential part of machine learning. In recent years, many methods have been developed to visualize the regions of an input image that are important for a deep neural network's decision. Since almost all of these methods are designed for models trained with supervision, we propose in this work a visualization technique for unsupervised trained autoencoders called Gradient-weighted Latent Activation Mapping (Grad-LAM). We adapt the idea of Grad-CAM and propose a novel weighting based on knowledge of the autoencoder's decoder. Our method helps to gain insight into the highly nonlinear mapping from an input image to a latent space. We show that the visualization maps of Grad-LAM are meaningful on simple datasets such as MNIST, and that the method is also applicable to real-world datasets such as ImageNet.
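The abstract gives no implementation details, but the Grad-CAM idea it adapts can be sketched: channel weights are obtained by global-average-pooling the gradient of a target scalar (here, hypothetically, a single latent unit of the autoencoder) with respect to a convolutional feature map, and the weighted maps are combined and rectified into a heatmap. The shapes, the random stand-in gradients, and the choice of one latent unit are illustrative assumptions; the paper's actual Grad-LAM weighting additionally uses decoder knowledge, which this minimal sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A: encoder feature maps at some conv layer, shape (K channels, H, W).
# In a real model these come from a forward pass; random here for the sketch.
A = rng.standard_normal((4, 8, 8))

# dz_dA: gradient of one latent unit z_j w.r.t. each feature-map activation.
# In practice this is obtained via backpropagation (e.g. framework hooks).
dz_dA = rng.standard_normal((4, 8, 8))

# Grad-CAM-style channel weights: global-average-pool the gradients.
alpha = dz_dA.mean(axis=(1, 2))  # shape (K,)

# Weighted combination of the feature maps, then ReLU to keep
# only regions with a positive influence on the latent unit.
cam = np.maximum(np.einsum("k,khw->hw", alpha, A), 0.0)

# Normalize to [0, 1] so the map can be overlaid on the input image.
cam = cam / (cam.max() + 1e-8)
```

In a full pipeline the resulting `cam` would be upsampled to the input resolution and overlaid on the image, as is common for Grad-CAM visualizations.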