A deep quasi-linear kernel composition method for support vector machines

Weite Li, Jinglu Hu, Benhui Chen
{"title":"A deep quasi-linear kernel composition method for support vector machines","authors":"Weite Li, Jinglu Hu, Benhui Chen","doi":"10.1109/IJCNN.2016.7727394","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce a data-dependent kernel called deep quasi-linear kernel, which can directly gain a profit from a pre-trained feedforward deep network. Firstly, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feed-forward neural network. The only difference between them is that the activation values of hidden units in the multi-layer gated bilinear classifier are dependent on a pre-trained neural network rather than a pre-defined activation function. Secondly, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for a kernel SVM can be directly implemented to implicitly optimize the parameters of the multi-layer gated bilinear classifier. Experimental results on different data sets show that our proposed classifier obtains an ability to outperform both an SVM with a RBF kernel and the pre-trained feedforward deep network.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2016.7727394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In this paper, we introduce a data-dependent kernel, called the deep quasi-linear kernel, which directly benefits from a pre-trained feedforward deep network. First, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feedforward neural network; the only difference between them is that the activation values of the hidden units in the multi-layer gated bilinear classifier are determined by a pre-trained neural network rather than by a pre-defined activation function. Second, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for kernel SVMs can be applied directly to implicitly optimize the parameters of the multi-layer gated bilinear classifier. Experimental results on several data sets show that the proposed classifier outperforms both an SVM with an RBF kernel and the pre-trained feedforward deep network.
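To make the equivalence claim concrete, the sketch below writes out one plausible form of the construction. Assume each gated bilinear layer multiplies gate values g^(l)(x), supplied by the pre-trained network, into a linear map; the layer's feature map then factorizes, and the kernel composes layer by layer. The recursion shown is an illustrative reading of the composition, not necessarily the exact function derived in the paper.

```latex
% Gated bilinear layer: the gates g^{(l)} come from the pre-trained
% network and are fixed; the weights W^{(l)} are the parameters the
% SVM optimizes implicitly.
h^{(l)}(x) = g^{(l)}(x) \odot \bigl( W^{(l)} h^{(l-1)}(x) \bigr),
\qquad h^{(0)}(x) = x .
% Stacking the per-unit features g_j^{(l)}(x)\, h^{(l-1)}(x) gives
%   \langle \phi^{(l)}(x), \phi^{(l)}(x') \rangle
%     = \langle g^{(l)}(x), g^{(l)}(x') \rangle
%       \langle h^{(l-1)}(x), h^{(l-1)}(x') \rangle ,
% so the kernel composes layer by layer:
K^{(0)}(x, x') = x^{\top} x' + 1,
\qquad
K^{(l)}(x, x') = \bigl\langle g^{(l)}(x),\, g^{(l)}(x') \bigr\rangle \, K^{(l-1)}(x, x') .
```

Since each factor ⟨g^(l)(x), g^(l)(x')⟩ is itself a valid kernel and the elementwise (Hadamard) product of positive semi-definite Gram matrices is positive semi-definite by the Schur product theorem, the composed K remains a legitimate SVM kernel.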
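Because the composed kernel can be evaluated directly from the data and the gate activations, off-the-shelf SVM solvers apply unchanged, as the abstract notes. The sketch below illustrates this with scikit-learn's precomputed-kernel interface. The sigmoid gates, the randomly initialized network standing in for a pre-trained one, and the composition rule carried over from the sketch above are all illustrative assumptions, not the authors' exact construction.

```python
# Minimal sketch of a deep quasi-linear kernel, assuming:
#  - gates g^(l)(x) are the sigmoid activations of a feedforward network
#    (randomly initialized here as a stand-in for a pre-trained one), and
#  - the composition rule K^(l) = <g^(l)(x), g^(l)(x')> * K^(l-1),
#    with K^(0)(x, x') = x.x' + 1 (an assumed form, see the note above).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate_activations(X, weights):
    """Forward pass through the stand-in 'pre-trained' network,
    collecting each hidden layer's sigmoid activations as gate signals."""
    gates, h = [], X
    for W, b in weights:
        h = sigmoid(h @ W + b)
        gates.append(h)
    return gates

def deep_quasi_linear_gram(XA, XB, weights):
    """Compose the kernel layer by layer:
    K^(0) = XA XB^T + 1, then K^(l) = (gA^(l) gB^(l)^T) * K^(l-1)."""
    K = XA @ XB.T + 1.0                      # base linear kernel
    for ga, gb in zip(gate_activations(XA, weights),
                      gate_activations(XB, weights)):
        K = (ga @ gb.T) * K                  # gate inner products, elementwise
    return K

# Toy data and a two-hidden-layer stand-in network.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
dims = [2, 16, 16]
weights = [(rng.normal(size=(dims[i], dims[i + 1])),
            rng.normal(size=dims[i + 1])) for i in range(len(dims) - 1)]

# Standard SVM machinery runs unchanged on the precomputed Gram matrix.
svm = SVC(kernel="precomputed", C=1.0)
svm.fit(deep_quasi_linear_gram(Xtr, Xtr, weights), ytr)
print("test accuracy:", svm.score(deep_quasi_linear_gram(Xte, Xtr, weights), yte))
```

Note the shape convention of the precomputed interface: fit takes the (n_train, n_train) Gram matrix, while predict and score take the (n_test, n_train) cross-Gram matrix between test and training points.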