Multiple Nonlinear Subspace Methods Using Subspace-based Support Vector Machines

Takuya Kitamura, S. Abe, Yusuke Tanaka
{"title":"基于子空间支持向量机的多非线性子空间方法","authors":"Takuya Kitamura, S. Abe, Yusuke Tanaka","doi":"10.1109/ICMLA.2011.100","DOIUrl":null,"url":null,"abstract":"In this paper, we propose multiple nonlinear subspace methods (MNSMs), in which each class consists of several subspaces with different kernel parameters. For each class and each candidate kernel parameter, we generate the subspace by KPCA, and obtain the projection length of an input vector onto each subspace. Then, for each class, we define the discriminant function by the sum of the weighted lengths. These weights in the discriminant function are optimized by subspace-based support vector machines (SS-SVMs) so that the margin between classes is maximized while minimizing the classification error. Thus, we can weight the subspaces for each class from the standpoint of class separability. Then, the computational cost of the model selection of MNSMs is lower than that of SS-SVMs because for SS-SVMs two hyper-parameters, which are the kernel parameter and the margin parameter, must be chosen before training. We show the advantages of the proposed method by computer experiments with benchmark data sets.","PeriodicalId":439926,"journal":{"name":"2011 10th International Conference on Machine Learning and Applications and Workshops","volume":"82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Multiple Nonlinear Subspace Methods Using Subspace-based Support Vector Machines\",\"authors\":\"Takuya Kitamura, S. Abe, Yusuke Tanaka\",\"doi\":\"10.1109/ICMLA.2011.100\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose multiple nonlinear subspace methods (MNSMs), in which each class consists of several subspaces with different kernel parameters. For each class and each candidate kernel parameter, we generate the subspace by KPCA, and obtain the projection length of an input vector onto each subspace. Then, for each class, we define the discriminant function by the sum of the weighted lengths. These weights in the discriminant function are optimized by subspace-based support vector machines (SS-SVMs) so that the margin between classes is maximized while minimizing the classification error. Thus, we can weight the subspaces for each class from the standpoint of class separability. Then, the computational cost of the model selection of MNSMs is lower than that of SS-SVMs because for SS-SVMs two hyper-parameters, which are the kernel parameter and the margin parameter, must be chosen before training. 
We show the advantages of the proposed method by computer experiments with benchmark data sets.\",\"PeriodicalId\":439926,\"journal\":{\"name\":\"2011 10th International Conference on Machine Learning and Applications and Workshops\",\"volume\":\"82 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 10th International Conference on Machine Learning and Applications and Workshops\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA.2011.100\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 10th International Conference on Machine Learning and Applications and Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2011.100","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In this paper, we propose multiple nonlinear subspace methods (MNSMs), in which each class consists of several subspaces with different kernel parameters. For each class and each candidate kernel parameter, we generate the subspace by kernel principal component analysis (KPCA) and obtain the projection length of an input vector onto each subspace. Then, for each class, we define the discriminant function as the sum of the weighted lengths. The weights in the discriminant function are optimized by subspace-based support vector machines (SS-SVMs) so that the margin between classes is maximized while the classification error is minimized. Thus, we can weight the subspaces for each class from the standpoint of class separability. Moreover, the computational cost of model selection for MNSMs is lower than that for SS-SVMs, because SS-SVMs require two hyperparameters, the kernel parameter and the margin parameter, to be chosen before training. We demonstrate the advantages of the proposed method through computer experiments on benchmark data sets.
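
The pipeline the abstract describes can be read as: for each class c, build one KPCA subspace per candidate kernel parameter, take the projection length of an input x onto each subspace as a feature, score the class with a weighted sum of its lengths, and classify by the largest score. The following is a minimal sketch of that pipeline, not the authors' implementation: the gamma grid, the subspace dimension, the data set, and the use of scikit-learn's one-vs-rest linear SVM in place of the paper's SS-SVM weight optimization are all assumptions made for illustration.

```python
# Hedged sketch of an MNSM-style classifier. KernelPCA plays the role of
# the KPCA step; a plain linear one-vs-rest SVM stands in for the SS-SVM
# weight optimization described in the abstract.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gammas = [0.1, 1.0, 10.0]   # candidate kernel parameters (assumed grid)
n_components = 3            # subspace dimension (assumed)
classes = np.unique(y_tr)

# One KPCA subspace per (class, kernel parameter) pair, fitted on that
# class's training samples only.
subspaces = {
    (c, g): KernelPCA(n_components=n_components, kernel="rbf", gamma=g)
            .fit(X_tr[y_tr == c])
    for c in classes for g in gammas
}

def projection_lengths(X):
    """Length of the projection of each sample onto every subspace."""
    cols = [np.linalg.norm(subspaces[(c, g)].transform(X), axis=1)
            for c in classes for g in gammas]
    return np.column_stack(cols)

# Stand-in for the SS-SVM step: learn the weights over the lengths with a
# linear one-vs-rest SVM, so each class score is a weighted sum of lengths.
clf = LinearSVC(C=1.0).fit(projection_lengths(X_tr), y_tr)
print("test accuracy:", clf.score(projection_lengths(X_te), y_te))
```

In this sketch each row of `projection_lengths(X)` concatenates the lengths for all (class, gamma) pairs, so the linear SVM's weight vector for class c is free to emphasize whichever of c's kernel parameters separates the classes best, which mirrors the weighting-by-separability idea in the abstract.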