Lattice point sets for efficient kernel smoothing models

C. Cervellera, Mauro Gaggero, Danilo Macciò, R. Marcialis
*2015 International Joint Conference on Neural Networks (IJCNN)*, pp. 1-8, published July 12, 2015. DOI: 10.1109/IJCNN.2015.7280469
Citations: 1

Abstract

This work addresses the problem of learning an unknown function from data when local models are employed. In particular, kernel smoothing models are considered, which use kernels in a straightforward fashion by modeling the output as a weighted average of values observed in a neighborhood of the input. Such models are a popular alternative to other kernel paradigms, such as support vector machines (SVMs), due to their very light computational burden. The purpose of this work is to prove that a smart deterministic selection of the observation points can be advantageous with respect to input data coming from pure random sampling. Apart from the theoretical interest, this has practical implications in all the cases in which one can control the generation of the input samples (e.g., in applications from robotics, dynamic programming, optimization, mechanics, etc.). To this purpose, lattice point sets (LPSs), a special kind of sampling scheme commonly employed for efficient numerical integration, are investigated. It is proved that building local kernel smoothers using LPSs guarantees the universal approximation property with better rates than i.i.d. sampling. Then, a rule for automatic kernel width selection, which makes the computational burden of building the model negligible, is introduced to show how the regular structure of the lattice can lead to practical advantages. Simulation results are also provided to test in practice the performance of the proposed methods.
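The two ingredients the abstract combines can be sketched briefly. A rank-1 lattice point set places the i-th sample at (i·z/n) mod 1 for a generating vector z, and a kernel smoother estimates the output at a query point as a kernel-weighted average of the observed values. The paper's specific kernel, generating vector, and width-selection rule are not given in the abstract; the sketch below uses a Gaussian kernel, a Fibonacci-type generating vector z = (1, 89) with n = 144, and a hand-picked width h as illustrative assumptions, not the authors' actual choices.

```python
import numpy as np

def rank1_lattice(n, d, z):
    """Rank-1 lattice point set in [0,1)^d: x_i = (i * z / n) mod 1."""
    i = np.arange(n).reshape(-1, 1)           # (n, 1)
    return (i * np.asarray(z, float) / n) % 1.0  # broadcasts to (n, d)

def kernel_smoother(X, y, x, h):
    """Kernel smoothing estimate at query x: a weighted average of the
    observed values y, with Gaussian weights based on distance to x."""
    d2 = np.sum((X - x) ** 2, axis=1)         # squared distances to x
    w = np.exp(-d2 / (2.0 * h ** 2))          # Gaussian kernel weights
    return np.dot(w, y) / np.sum(w)

# Learn a smooth test function from samples placed on the lattice.
n, d = 144, 2
X = rank1_lattice(n, d, z=[1, 89])            # generating vector is illustrative
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])

x_query = np.array([0.3, 0.7])
est = kernel_smoother(X, y, x_query, h=0.08)  # width h chosen by hand here
true = np.sin(2 * np.pi * 0.3) * np.cos(2 * np.pi * 0.7)
```

Because the lattice covers the domain evenly, every query point has observations nearby at a predictable spacing, which is what makes a simple data-driven width rule feasible; with i.i.d. random samples the local density fluctuates and the width must adapt to it.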