{"title":"Disentangled Representations by Pseudo-Maximum Mutual Information for Interpreting Multi-Layered Neural Networks","authors":"R. Kamimura","doi":"10.1109/IIAI-AAI50415.2020.00094","DOIUrl":null,"url":null,"abstract":"The present paper aims to propose a new type of information-theoretic method to disentangle complex information to have easily interpretable representations for multi-layered neural networks. By this disentanglement of complex information, multi-layered neural networks can be easily compressed to the simplest ones with simple, linear and individual relations between inputs and outputs. The principal idea is to train neural networks by supposing maximum mutual information states before learning, namely, pseudo-maximum information maximization. This pseudo-maximum information method can greatly facilitate the implementation of maximum information procedures. The method was applied to the well-known Boston housing data set for easily reproducing the present results. The experimental results confirmed that pseudo-mutual information can be used to increase actual mutual information. In addition, when mutual information increased, compressed weights from multi-layered neural networks became similar to the correlation coefficients between inputs and targets of original data set. Thus, the method could successfully show that the main inference mechanism can be based on linear and individual relations between inputs and outputs with additional and peripheral nonlinear relations.","PeriodicalId":188870,"journal":{"name":"2020 9th International Congress on Advanced Applied Informatics (IIAI-AAI)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 9th International Congress on Advanced Applied Informatics (IIAI-AAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IIAI-AAI50415.2020.00094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper proposes a new information-theoretic method that disentangles complex information in multi-layered neural networks into easily interpretable representations. With this disentanglement, a multi-layered network can be compressed into the simplest possible one, in which inputs and outputs are connected by simple, linear, and individual relations. The principal idea is to train the network under the supposition that maximum-mutual-information states already hold before learning begins, a scheme called pseudo-maximum information maximization. Supposing, rather than explicitly computing, these states greatly simplifies the implementation of maximum-information procedures. The method was applied to the well-known Boston housing data set so that the results can be easily reproduced. The experiments confirmed that pseudo-mutual information can be used to increase actual mutual information. In addition, as mutual information increased, the weights compressed from the multi-layered network became similar to the correlation coefficients between the inputs and targets of the original data set. The method thus shows that the main inference mechanism can rest on linear and individual relations between inputs and outputs, supplemented by peripheral nonlinear relations.
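The abstract names two concrete operations: measuring mutual information between inputs and hidden units, and collapsing a multi-layered network into direct input-output weights that can be compared with input-target correlations. The following is a minimal numpy sketch of both, not the paper's code: the synthetic stand-in data, the 13-10-10-1 network shape, the softmax activation model behind the mutual-information estimate, and the matrix-product form of weight compression are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's procedure):
#   1. Mutual information between inputs and hidden units, computed from
#      normalized hidden activations treated as conditional probabilities.
#   2. "Weight compression": multiplying the layer matrices to obtain one
#      direct weight per input-output pair, then comparing that profile
#      with the correlation coefficients between inputs and the target.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 13 inputs (matching the Boston housing set), one target.
n_samples, n_inputs, n_hidden1, n_hidden2 = 200, 13, 10, 10
X = rng.standard_normal((n_samples, n_inputs))
beta = rng.standard_normal(n_inputs)
y = X @ beta + 0.1 * rng.standard_normal(n_samples)

# Hypothetical layer weights of a 13-10-10-1 network.
W1 = rng.standard_normal((n_inputs, n_hidden1))
W2 = rng.standard_normal((n_hidden1, n_hidden2))
W3 = rng.standard_normal((n_hidden2, 1))

def mutual_information(X, W, temperature=1.0):
    """MI between input patterns and hidden units, reading normalized
    activations p(j|s) as conditional firing probabilities (uniform p(s))."""
    act = np.exp(X @ W / temperature)
    p_j_given_s = act / act.sum(axis=1, keepdims=True)  # p(j|s)
    p_j = p_j_given_s.mean(axis=0)                      # p(j)
    eps = 1e-12
    # Average KL divergence between p(j|s) and p(j) equals the MI.
    return np.mean(np.sum(
        p_j_given_s * np.log((p_j_given_s + eps) / (p_j + eps)), axis=1))

# A sharper (lower-temperature) activation profile stands in for the
# supposed maximum-information state: MI rises as responses grow selective.
print("MI, diffuse activations  :", mutual_information(X, W1, temperature=5.0))
print("MI, selective activations:", mutual_information(X, W1, temperature=0.2))

# Weight compression: ignore the nonlinearities and multiply the layers,
# collapsing the network into direct input-output weights.
compressed = (W1 @ W2 @ W3).ravel()

# Correlation coefficients between each input and the target.
corr = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(n_inputs)])

# The abstract's claim: as mutual information grows during training, these
# two 13-element profiles become similar.
print("corr(compressed weights, input-target correlations):",
      np.corrcoef(compressed, corr)[0, 1])
```

With the random weights used here, the final correlation is near zero; the paper's reported finding is that after pseudo-maximum information training the compressed weights track the input-target correlations closely, which is what makes the trained network interpretable as a set of linear, individual input-output relations.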