Error analysis of classification learning algorithms based on LUMs loss

IF 1.3 · Q3 · Computer Science, Theory & Methods
Xuqing He, Hongwei Sun
{"title":"Error analysis of classification learning algorithms based on LUMs loss","authors":"Xuqing He, Hongwei Sun","doi":"10.3934/mfc.2022028","DOIUrl":null,"url":null,"abstract":"<p style='text-indent:20px;'>In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problem. The hypothesis space is taken to be a reproducing kernel Hilbert space <inline-formula><tex-math id=\"M1\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>, and the penalty term is denoted by the norm of the function in <inline-formula><tex-math id=\"M2\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>. Since the LUM loss functions are differentiable and convex, so the data piling phenomena can be avoided when dealing with the high-dimension low-sample size data. The error analysis of this classification learning machine mainly lies upon the comparison theorem [<xref ref-type=\"bibr\" rid=\"b3\">3</xref>] which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition which shows that the minimizer <inline-formula><tex-math id=\"M3\">\\begin{document}$ f_V $\\end{document}</tex-math></inline-formula> of the generalization error can be approximated by the hypothesis space <inline-formula><tex-math id=\"M4\">\\begin{document}$ {\\mathcal H}_K $\\end{document}</tex-math></inline-formula>, and by a leave one out variant technique proposed in [<xref ref-type=\"bibr\" rid=\"b13\">13</xref>], satisfying error bound and learning rate about the mean of excess classification error are deduced.</p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"78 1","pages":"616-624"},"PeriodicalIF":1.3000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematical foundations of computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3934/mfc.2022028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 1

Abstract

In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problems. The hypothesis space is taken to be a reproducing kernel Hilbert space $\mathcal{H}_K$, and the penalty term is given by the norm of the function in $\mathcal{H}_K$. Since the LUM loss functions are differentiable and convex, the data piling phenomenon can be avoided when dealing with high-dimension, low-sample-size data. The error analysis of this classification learning machine relies mainly on the comparison theorem [3], which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition, which requires that the minimizer $f_V$ of the generalization error can be approximated in the hypothesis space $\mathcal{H}_K$, and by means of a leave-one-out variant technique proposed in [13], a satisfactory error bound and learning rate for the mean of the excess classification error are deduced.
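
For concreteness, the scheme studied above can be written out explicitly. The displayed formulas below are a sketch recalled from the standard large-margin unified machine literature rather than from this abstract; the parameters $a > 0$, $c \ge 0$ and the regularization parameter $\lambda > 0$ are assumptions made for illustration. Given a sample $z = \{(x_i, y_i)\}_{i=1}^m$ with $y_i \in \{-1, +1\}$, the regularized LUM classifier is
$$ f_z = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m} \sum_{i=1}^m V(y_i f(x_i)) + \lambda \|f\|_K^2 \Big\}, $$
where the loss $V$ acts on the margin $u = y f(x)$ through
$$ V(u) = \begin{cases} 1-u, & u < \dfrac{c}{1+c}, \\[4pt] \dfrac{1}{1+c}\Big(\dfrac{a}{(1+c)u - c + a}\Big)^{a}, & u \ge \dfrac{c}{1+c}. \end{cases} $$
The two branches match in value and slope at $u = c/(1+c)$, so $V$ is convex and differentiable, which is the property the abstract credits with avoiding data piling in the high-dimension, low-sample-size regime; the comparison theorem of [3] then transfers bounds on the excess generalization error $\mathcal{E}(f_z) - \mathcal{E}(f_V)$, with $\mathcal{E}(f) = \int V(yf(x))\, d\rho$, to the excess classification error.

The short sketch below makes the same objective concrete numerically. It is not the authors' implementation: the Gaussian kernel, the parameter values, and plain gradient descent over the kernel-expansion coefficients are all assumptions chosen only to illustrate the scheme above.

```python
# A minimal numerical sketch, not the paper's implementation. The LUM loss form,
# the Gaussian kernel, and every parameter value (a, c, lam, width, lr, steps)
# are assumptions chosen for illustration.
import numpy as np

def lum_loss(u, a=1.0, c=0.0):
    """LUM loss V(u) evaluated on the margin u = y * f(x)."""
    thresh = c / (1.0 + c)
    denom = np.maximum((1.0 + c) * u - c + a, 1e-12)  # clamp only to silence the unused branch
    right = (1.0 / (1.0 + c)) * (a / denom) ** a
    return np.where(u < thresh, 1.0 - u, right)

def lum_grad(u, a=1.0, c=0.0):
    """Derivative V'(u): -1 on the linear branch, a smooth negative decay on the other."""
    thresh = c / (1.0 + c)
    denom = np.maximum((1.0 + c) * u - c + a, 1e-12)
    right = -(a ** (a + 1.0)) * denom ** (-(a + 1.0))
    return np.where(u < thresh, -1.0, right)

def fit_lum(X, y, lam=0.1, width=1.0, lr=0.1, steps=500, a=1.0, c=0.0):
    """Gradient descent on (1/m) sum_i V(y_i f(x_i)) + lam * ||f||_K^2
    with f = sum_j alpha_j K(x_j, .) (representer-theorem expansion)."""
    m = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2.0 * width ** 2))      # Gaussian kernel Gram matrix
    alpha = np.zeros(m)
    for _ in range(steps):
        f = K @ alpha                          # f(x_i) at the sample points
        g = lum_grad(y * f, a, c) * y          # chain rule through the margin y_i * f(x_i)
        alpha -= lr * (K @ g / m + 2.0 * lam * (K @ alpha))
    return alpha, K

# Tiny usage example on synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)), rng.normal(1.0, 1.0, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
alpha, K = fit_lum(X, y)
print("training sign accuracy:", np.mean(np.sign(K @ alpha) == y))
# lam=0.1 below matches the default used in fit_lum.
print("final regularized objective:", lum_loss(y * (K @ alpha)).mean() + 0.1 * alpha @ (K @ alpha))
```

As a design note, the representer-theorem expansion $f = \sum_j \alpha_j K(x_j, \cdot)$ reduces the RKHS problem to a finite-dimensional convex one, which is why plain gradient descent on the coefficients suffices for this sketch.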
