The Characteristics of Kernel and Kernel-based Learning

Fuxiao Tan, Dezhi Han
{"title":"Kemel的特点和基于Kemel的学习","authors":"Fuxiao Tan, Dezhi Han","doi":"10.1109/ISASS.2019.8757761","DOIUrl":null,"url":null,"abstract":"In the application of Support Vector Machines (SVM), if the data points are not linearly separable in the original space, it is desirable to find a mapping function to map the data into the high-dimensional space, and then classify it. However, the mapped target space is often very high-dimensional or even infinite-dimensional, so it is necessary to find a function instead of the operation of finding the inner product of the vector in the high-dimensional space. Thus, this function is named as kernel function. The selection of the kernel function is required to satisfy Mercer’s theorem, that is, the arbitrary Gram matrix of the kernel function in the sample space is a semi-positive definite matrix. Furthermore, kernel method is also an approach to achieve efficient calculation. It can make use of kernel function to carry out synchronous computation of nonlinear mapping in linear learning machine, so that the computational complexity is independent of the dimension of the high-dimensional feature space. Kernel function subtly solves the above problem. In high- dimension, the inner product of the vector can be calculated by the kernel function of the low-dimensional point. This technique is called kernel trick. The advantage of kernel trick is that it does not need to explicitly define the feature space and mapping function, but only need to select a suitable kernel function. In this paper, we first introduce the basic definitions of kernel function and RKHS. On this basis, the least squares learning problem of Gaussian kernel is studied. Finally, the influence of the selection of Gaussian kernel function parameters on the learning algorithm is verified by computer simulation.","PeriodicalId":359959,"journal":{"name":"2019 3rd International Symposium on Autonomous Systems (ISAS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Characteristics of Kemel and Kemel-based Learning\",\"authors\":\"Fuxiao Tan, Dezhi Han\",\"doi\":\"10.1109/ISASS.2019.8757761\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the application of Support Vector Machines (SVM), if the data points are not linearly separable in the original space, it is desirable to find a mapping function to map the data into the high-dimensional space, and then classify it. However, the mapped target space is often very high-dimensional or even infinite-dimensional, so it is necessary to find a function instead of the operation of finding the inner product of the vector in the high-dimensional space. Thus, this function is named as kernel function. The selection of the kernel function is required to satisfy Mercer’s theorem, that is, the arbitrary Gram matrix of the kernel function in the sample space is a semi-positive definite matrix. Furthermore, kernel method is also an approach to achieve efficient calculation. It can make use of kernel function to carry out synchronous computation of nonlinear mapping in linear learning machine, so that the computational complexity is independent of the dimension of the high-dimensional feature space. Kernel function subtly solves the above problem. In high- dimension, the inner product of the vector can be calculated by the kernel function of the low-dimensional point. 
This technique is called kernel trick. The advantage of kernel trick is that it does not need to explicitly define the feature space and mapping function, but only need to select a suitable kernel function. In this paper, we first introduce the basic definitions of kernel function and RKHS. On this basis, the least squares learning problem of Gaussian kernel is studied. Finally, the influence of the selection of Gaussian kernel function parameters on the learning algorithm is verified by computer simulation.\",\"PeriodicalId\":359959,\"journal\":{\"name\":\"2019 3rd International Symposium on Autonomous Systems (ISAS)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 3rd International Symposium on Autonomous Systems (ISAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISASS.2019.8757761\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 3rd International Symposium on Autonomous Systems (ISAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISASS.2019.8757761","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In applications of Support Vector Machines (SVMs), if the data points are not linearly separable in the original space, it is desirable to find a mapping that sends the data into a high-dimensional feature space, where they can then be classified. However, the target space is often very high-dimensional or even infinite-dimensional, so one needs a function that replaces the explicit computation of inner products between vectors in that space. This function is called the kernel function. A valid kernel must satisfy Mercer's theorem: every Gram matrix formed by the kernel on points of the sample space is positive semi-definite. The kernel method is also a route to efficient computation: the kernel function lets a linear learning machine carry out the nonlinear mapping implicitly, so the computational complexity is independent of the dimension of the high-dimensional feature space. The kernel function thus resolves the problem above neatly: inner products of high-dimensional vectors are computed by evaluating the kernel at the original low-dimensional points. This technique is called the kernel trick. Its advantage is that the feature space and mapping function never need to be defined explicitly; it suffices to select a suitable kernel function. In this paper, we first introduce the basic definitions of kernel functions and the reproducing kernel Hilbert space (RKHS). On this basis, we study the least-squares learning problem with the Gaussian kernel. Finally, the influence of the choice of Gaussian kernel parameters on the learning algorithm is verified by computer simulation.
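To make the kernel trick and the role of the Gaussian kernel width concrete, the following is a minimal Python sketch of Gaussian-kernel regularized least squares (not code from the paper; numpy, the toy sine-curve data, and the parameter values `sigma` and `lam` are illustrative assumptions). Fitting reduces to solving an n-by-n linear system in the Gram matrix, so the infinite-dimensional feature space of the Gaussian kernel is never materialized, and a quick eigenvalue check confirms Mercer's positive semi-definiteness condition.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 * sigma^2))."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit(X, y, sigma, lam):
    """Regularized least squares in the RKHS: solve (K + lam*I) alpha = y."""
    K = gaussian_kernel(X, X, sigma)
    # Mercer's condition: the Gram matrix is positive semi-definite
    # (eigenvalues >= 0 up to numerical round-off).
    assert np.linalg.eigvalsh(K).min() > -1e-8
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, sigma):
    """f(x) = sum_i alpha_i * k(x_i, x): only kernel evaluations,
    never the (infinite-dimensional) feature map itself."""
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy 1-D regression problem: learn sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
X_test = np.linspace(-3.0, 3.0, 200)[:, None]

# Sweep the kernel width: a too-small sigma overfits the noise,
# a too-large sigma oversmooths; a moderate value fits well.
for sigma in (0.05, 0.5, 5.0):
    alpha = fit(X, y, sigma=sigma, lam=1e-2)
    err = np.abs(predict(X, alpha, X_test, sigma) - np.sin(X_test[:, 0])).mean()
    print(f"sigma={sigma}: mean abs error = {err:.3f}")
```

The parameter sweep at the end mirrors the paper's closing point: the quality of the learned function depends strongly on the choice of the Gaussian kernel width, degrading at both extremes.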