A multivariate Jacobi polynomials regression estimator associated with an ANOVA decomposition model

Pub Date: 2024-02-26 · DOI: 10.1007/s00184-024-00954-4

Abstract

In this work, we construct a stable and fairly fast estimator for solving multidimensional non-parametric regression problems. The proposed estimator is based on a novel and special system of multivariate Jacobi polynomials that generates a basis for a reduced-size space of \(d\)-variate polynomials. An ANOVA decomposition trick is used to build this space. Moreover, using results from the theory of positive definite random matrices, we show that the proposed estimator is stable provided that the i.i.d. \(d\)-dimensional random training points follow a \(d\)-dimensional Beta distribution. In addition, we provide the reader with an estimate of the \(L^2\)-risk of the estimator. This risk depends on the \(L^2\)-error of the orthogonal projection of the regression function onto the considered polynomial space. A detailed study of this orthogonal projection error is carried out under the assumption that the regression function belongs to a given weighted Sobolev space. Thanks to this novel estimate of the orthogonal projection error, we obtain the optimal convergence rate of our estimator. Furthermore, we give a regularized extension of our estimator that can handle random training vectors drawn according to an unknown multivariate pdf, and we derive an upper bound for the empirical risk of this regularized estimator. Finally, we present numerical simulations that illustrate the various theoretical results of this work. In particular, we provide simulations on real data that compare the performance of our estimator with some existing and popular non-parametric regression estimators.
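The estimator described above can be sketched in miniature: fit a least-squares regression on a tensor-product orthogonal-polynomial basis whose multi-indices are truncated ANOVA-style (only low interaction orders kept), with training points drawn i.i.d. from a Beta distribution. The sketch below is illustrative only, not the paper's construction: it uses Legendre polynomials (the Jacobi case \(\alpha=\beta=0\)) via NumPy, a hypothetical additive target function, and arbitrarily chosen degree, interaction order, and Beta(2, 2) parameters.

```python
import itertools
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
d, n, max_deg, max_order = 2, 400, 3, 1  # dimension, sample size, degree, ANOVA order

# ANOVA-style reduced index set: multi-indices with at most `max_order`
# active (nonzero-degree) coordinates, instead of the full tensor grid.
indices = [k for k in itertools.product(range(max_deg + 1), repeat=d)
           if sum(kj > 0 for kj in k) <= max_order]

def design(X):
    """Evaluate the tensor-product Legendre basis (Jacobi, alpha=beta=0) at rows of X."""
    cols = []
    for k in indices:
        col = np.ones(len(X))
        for j, kj in enumerate(k):
            e = np.zeros(kj + 1)
            e[kj] = 1.0  # coefficient vector selecting the degree-kj Legendre polynomial
            col *= legendre.legval(X[:, j], e)
        cols.append(col)
    return np.column_stack(cols)

# i.i.d. training points from a Beta(2, 2) distribution on [0, 1]^d, mapped to [-1, 1]^d.
X = 2.0 * rng.beta(2.0, 2.0, size=(n, d)) - 1.0
f = lambda x: np.sin(x[:, 0]) + 0.5 * x[:, 1]  # hypothetical additive (order-1 ANOVA) target
y = f(X) + 0.05 * rng.standard_normal(n)

# Least-squares fit of the basis coefficients.
coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Empirical L2 error on fresh points from the same sampling distribution.
Xt = 2.0 * rng.beta(2.0, 2.0, size=(200, d)) - 1.0
err = np.sqrt(np.mean((design(Xt) @ coef - f(Xt)) ** 2))
print(err)
```

Because the target is additive, restricting the index set to interaction order 1 costs nothing here while shrinking the basis from \((\text{max\_deg}+1)^d\) to a linear-in-\(d\) number of functions; that size reduction is the point of the ANOVA truncation.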
