{"title":"非线性多层主成分分析的稀疏性和层次聚类解释方法","authors":"N. Koda, Sumio Watanabe","doi":"10.1109/ICMLA.2016.0193","DOIUrl":null,"url":null,"abstract":"Nonlinear multilayer principal component analysis (NMPCA) is well-known as an improved version of principal component analysis (PCA) using a five layer bottleneck neural network. NMPCA enables us to extract nonlinear hidden structure from high dimensional data, however, it has been difficult for users to understand obtained results, because trained results of NMPCA have many different locally optimal parameters depending on initial parameters. There has been no method how to find a few essential structures from many differently trained networks. This paper proposes a new interpretation method of NMPCA by extracting a few essential structures from many differently trained and locally optimal parameters. In the proposed method, firstly the weight parameters are made to be sparsely represented by LASSO training and appropriately ordered using the generalized factor loadings, then classified into a few hierarchical clusters, so that users can understand the extracted results. Its effectiveness is shown by both artificial and real world problems.","PeriodicalId":356182,"journal":{"name":"2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"226 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interpretation Method of Nonlinear Multilayer Principal Component Analysis by Using Sparsity and Hierarchical Clustering\",\"authors\":\"N. Koda, Sumio Watanabe\",\"doi\":\"10.1109/ICMLA.2016.0193\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nonlinear multilayer principal component analysis (NMPCA) is well-known as an improved version of principal component analysis (PCA) using a five layer bottleneck neural network. 
NMPCA enables us to extract nonlinear hidden structure from high dimensional data, however, it has been difficult for users to understand obtained results, because trained results of NMPCA have many different locally optimal parameters depending on initial parameters. There has been no method how to find a few essential structures from many differently trained networks. This paper proposes a new interpretation method of NMPCA by extracting a few essential structures from many differently trained and locally optimal parameters. In the proposed method, firstly the weight parameters are made to be sparsely represented by LASSO training and appropriately ordered using the generalized factor loadings, then classified into a few hierarchical clusters, so that users can understand the extracted results. Its effectiveness is shown by both artificial and real world problems.\",\"PeriodicalId\":356182,\"journal\":{\"name\":\"2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"volume\":\"226 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA.2016.0193\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 15th IEEE International Conference on Machine Learning and Applications 
(ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2016.0193","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Interpretation Method of Nonlinear Multilayer Principal Component Analysis by Using Sparsity and Hierarchical Clustering
Nonlinear multilayer principal component analysis (NMPCA) is a well-known extension of principal component analysis (PCA) that uses a five-layer bottleneck neural network. NMPCA can extract nonlinear hidden structure from high-dimensional data; however, its trained results have been difficult for users to interpret, because training converges to many different locally optimal parameters depending on the initial parameters. No method has been available for finding a few essential structures among many differently trained networks. This paper proposes a new interpretation method for NMPCA that extracts a few essential structures from many differently trained, locally optimal parameter sets. In the proposed method, the weight parameters are first given sparse representations by LASSO training and appropriately ordered using generalized factor loadings; they are then grouped into a few hierarchical clusters, so that users can understand the extracted results. The method's effectiveness is demonstrated on both artificial and real-world problems.
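As a rough illustration of the pipeline the abstract describes (sparsify the weight vectors obtained from many differently initialized training runs, then group them into a few hierarchical clusters), the following NumPy sketch works on simulated stand-in data. The function names, the soft-thresholding step used as a stand-in for LASSO sparsification, and the simulated "training runs" are assumptions for illustration only, not the authors' code:

```python
import numpy as np

def soft_threshold(w, lam):
    # LASSO-style sparsification: shrink small weights exactly to zero
    # (the proximal operator of the L1 penalty).
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def agglomerative_clusters(X, n_clusters):
    # Minimal average-linkage agglomerative clustering in pure NumPy:
    # start with singleton clusters, repeatedly merge the closest pair.
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    labels = np.empty(len(X), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

rng = np.random.default_rng(0)
# Stand-in for weight vectors from 30 differently initialized runs:
# three distinct local optima, ten noisy copies of each.
centers = rng.normal(size=(3, 8)) * 5.0
runs = np.vstack([c + 0.1 * rng.normal(size=(10, 8)) for c in centers])

sparse_runs = soft_threshold(runs, 0.05)       # sparsify each run's weights
labels = agglomerative_clusters(sparse_runs, 3)  # recover the few essential structures
```

With well-separated local optima, the 30 runs collapse into 3 clusters, each corresponding to one essential structure; a user then only needs to inspect one representative per cluster instead of all 30 trained networks.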