{"title":"稀疏连接的自动编码器","authors":"Kavya Gupta, A. Majumdar","doi":"10.1109/IJCNN.2016.7727437","DOIUrl":null,"url":null,"abstract":"This work proposes to learn autoencoders with sparse connections. Prior studies on autoencoders enforced sparsity on the neuronal activity; these are different from our proposed approach - we learn sparse connections. Sparsity in connections helps in learning (and keeping) the important relations while trimming the irrelevant ones. We have tested the performance of our proposed method on two tasks - classification and denoising. For classification we have compared against stacked autneencoders, contractive autoencoders, deep belief network, sparse deep neural network and optimal brain damage neural network; the denoising performance was compared against denoising autoencoder and sparse (activity) autoencoder. In both the tasks our proposed method yields superior results.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Sparsely connected autoencoder\",\"authors\":\"Kavya Gupta, A. Majumdar\",\"doi\":\"10.1109/IJCNN.2016.7727437\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work proposes to learn autoencoders with sparse connections. Prior studies on autoencoders enforced sparsity on the neuronal activity; these are different from our proposed approach - we learn sparse connections. Sparsity in connections helps in learning (and keeping) the important relations while trimming the irrelevant ones. We have tested the performance of our proposed method on two tasks - classification and denoising. For classification we have compared against stacked autneencoders, contractive autoencoders, deep belief network, sparse deep neural network and optimal brain damage neural network; the denoising performance was compared against denoising autoencoder and sparse (activity) autoencoder. In both the tasks our proposed method yields superior results.\",\"PeriodicalId\":109405,\"journal\":{\"name\":\"2016 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2016.7727437\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2016.7727437","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This work proposes to learn autoencoders with sparse connections. Prior studies on autoencoders enforced sparsity on the neuronal activity; our approach is different - we learn sparse connections. Sparsity in the connections helps to learn (and keep) the important relations while trimming the irrelevant ones. We have tested the performance of our proposed method on two tasks - classification and denoising. For classification we compared against stacked autoencoders, contractive autoencoders, the deep belief network, the sparse deep neural network and the optimal-brain-damage neural network; for denoising we compared against the denoising autoencoder and the sparse (activity) autoencoder. On both tasks our proposed method yields superior results.
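To make the distinction between connection sparsity and activity sparsity concrete, below is a minimal sketch, not the authors' algorithm: a single-hidden-layer autoencoder trained with an l1 penalty on the weight matrices via proximal gradient descent (ISTA-style soft-thresholding), so individual connections are driven exactly to zero. The architecture, optimizer, and hyperparameters here are illustrative assumptions; an activity-sparse autoencoder would instead penalize the hidden activations H.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_connection_autoencoder(X, n_hidden=64, lam=1e-3,
                                        lr=0.1, epochs=200, seed=0):
    """Autoencoder with an l1 penalty on the weights (illustrative sketch).

    Loss: (1/2n) * ||sigmoid(X W1) W2 - X||_F^2 + lam * (||W1||_1 + ||W2||_1)
    The l1 term acts on individual weight entries (connections), unlike
    activity sparsity, which penalizes the hidden outputs.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, n_hidden))   # encoder weights
    W2 = rng.normal(0.0, 0.1, (n_hidden, d))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                     # hidden activations
        E = H @ W2 - X                          # reconstruction error
        # Gradients of the smooth (squared-error) term
        gW2 = H.T @ E / n
        gW1 = X.T @ ((E @ W2.T) * H * (1.0 - H)) / n
        # Gradient step on the smooth part
        W1 -= lr * gW1
        W2 -= lr * gW2
        # Proximal (soft-threshold) step for the l1 penalty:
        # this is what zeroes out irrelevant connections.
        W1 = np.sign(W1) * np.maximum(np.abs(W1) - lr * lam, 0.0)
        W2 = np.sign(W2) * np.maximum(np.abs(W2) - lr * lam, 0.0)
    return W1, W2
```

After training, the fraction of exactly-zero entries, e.g. `(W1 == 0).mean()`, measures how many connections were trimmed; raising `lam` trades reconstruction accuracy for a sparser connection pattern.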