Kernel-Based Autoencoders for Large-Scale Representation Learning
Jinzhou Bao, Bo Zhao, Ping Guo
Proceedings of the 7th International Conference on Robotics and Artificial Intelligence, 2021-11-19. DOI: 10.1145/3505688.3505707
Citation count: 0
Abstract
A primary challenge in kernel-based representation learning comes from massive data and excess noise features. To overcome this challenge, this paper investigates a deep stacked autoencoder framework, named improved kernelized pseudoinverse learning autoencoders (IKPILAE), which extracts representation information from each building block. IKPILAE consists of two core modules. The first module extracts random features from large-scale training data via an approximate kernel method. The second module is a typical pseudoinverse learning algorithm. To reduce the tendency of neural networks to overfit, a weight decay regularization term is added to the loss function so that a more generalized representation is learned. Through numerical experiments on benchmark datasets, we demonstrate that IKPILAE outperforms state-of-the-art methods in kernel-based representation learning.
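The abstract does not give the framework's equations, but the two-module pipeline it describes can be sketched under common assumptions: module one approximates a kernel map with random Fourier features (Rahimi & Recht), and module two trains each autoencoder block in closed form by a regularized pseudoinverse (ridge) solve, with `weight_decay` standing in for the weight decay term in the loss. All function names, sizes, and hyperparameters below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=128, gamma=1.0, rng=rng):
    """Module 1 (assumed): approximate an RBF kernel feature map
    with random cosine projections (Rahimi & Recht style)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def pseudoinverse_ae_block(H, n_hidden, weight_decay=1e-3, rng=rng):
    """Module 2 (assumed): one autoencoder building block.
    A random encoder produces hidden codes Z; the decoder D is then
    solved in closed form by ridge-regularized least squares,
    minimizing ||Z @ D - H||^2 + weight_decay * ||D||^2."""
    d = H.shape[1]
    W_enc = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n_hidden))
    Z = np.tanh(H @ W_enc)
    # Regularized pseudoinverse solve: (Z^T Z + λI) D = Z^T H.
    D = np.linalg.solve(Z.T @ Z + weight_decay * np.eye(n_hidden), Z.T @ H)
    return Z, D

# Stack blocks on synthetic data: each block's codes feed the next.
X = rng.normal(size=(200, 10))
Phi = random_fourier_features(X, n_features=128)
Z1, D1 = pseudoinverse_ae_block(Phi, n_hidden=64)
Z2, D2 = pseudoinverse_ae_block(Z1, n_hidden=32)
print(Phi.shape, Z1.shape, Z2.shape)
```

The weight decay term here plays the role the abstract assigns it: it conditions the normal equations and shrinks the decoder weights, trading a small reconstruction bias for a representation that generalizes better.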