Kernel-Based Autoencoders for Large-Scale Representation Learning

Jinzhou Bao, Bo Zhao, Ping Guo
Proceedings of the 7th International Conference on Robotics and Artificial Intelligence, 2021-11-19
DOI: 10.1145/3505688.3505707

Abstract

A primary challenge in kernel-based representation learning comes from massive data and excessive noise features. To overcome this challenge, this paper investigates a deep stacked autoencoder framework, named improved kernelized pseudoinverse learning autoencoders (IKPILAE), which extracts representation information from each building block. IKPILAE consists of two core modules. The first module extracts random features from large-scale training data using an approximate kernel method. The second module is a typical pseudoinverse learning algorithm. To diminish the tendency toward overfitting in neural networks, a weight decay regularization term is added to the loss function to learn a more generalized representation. Through numerical experiments on benchmark datasets, we demonstrate that IKPILAE outperforms state-of-the-art methods in kernel-based representation learning.
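The two modules described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes random Fourier features as the approximate kernel method and a single-hidden-layer pseudoinverse autoencoder whose decoder is solved in closed form with a ridge (weight decay) term; all function names, dimensions, and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=128, gamma=0.5):
    # Module 1 (assumed form): approximate an RBF kernel map with
    # random Fourier features, giving a fixed random feature extractor.
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def pil_ae_layer(X, hidden_dim=64, weight_decay=1e-2):
    # Module 2 (assumed form): one pseudoinverse-learning autoencoder
    # block. The encoder is a random projection with tanh activation;
    # the decoder is obtained in closed form by regularized least
    # squares, where weight_decay plays the role of the weight decay
    # term mentioned in the abstract.
    W_enc = rng.normal(size=(X.shape[1], hidden_dim))
    H = np.tanh(X @ W_enc)                       # hidden representation
    A = H.T @ H + weight_decay * np.eye(hidden_dim)
    W_dec = np.linalg.solve(A, H.T @ X)          # regularized pseudoinverse
    recon_err = np.linalg.norm(H @ W_dec - X) / np.linalg.norm(X)
    return H, recon_err

X = rng.normal(size=(500, 20))
Z = random_fourier_features(X)        # kernel-approximating random features
H1, err1 = pil_ae_layer(Z)            # first autoencoder building block
H2, err2 = pil_ae_layer(H1)           # stack a second block
print(Z.shape, H1.shape, H2.shape)
```

Stacking `pil_ae_layer` calls mirrors the "deep stacked" structure: each block's hidden representation becomes the next block's input, and no gradient-based training is needed because each decoder is solved analytically.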