{"title":"功能自适应胡贝尔线性回归","authors":"Ling Peng, Xiaohui Liu, Heng Lian","doi":"arxiv-2409.11053","DOIUrl":null,"url":null,"abstract":"Robust estimation has played an important role in statistical and machine\nlearning. However, its applications to functional linear regression are still\nunder-developed. In this paper, we focus on Huber's loss with a diverging\nrobustness parameter which was previously used in parametric models. Compared\nto other robust methods such as median regression, the distinction is that the\nproposed method aims to estimate the conditional mean robustly, instead of\nestimating the conditional median. We only require $(1+\\kappa)$-th moment\nassumption ($\\kappa>0$) on the noise distribution, and the established error\nbounds match the optimal rate in the least-squares case as soon as $\\kappa\\ge\n1$. We establish convergence rate in probability when the functional predictor\nhas a finite 4-th moment, and finite-sample bound with exponential tail when\nthe functional predictor is Gaussian, in terms of both prediction error and\n$L^2$ error. The results also extend to the case of functional estimation in a\nreproducing kernel Hilbert space (RKHS).","PeriodicalId":501379,"journal":{"name":"arXiv - STAT - Statistics Theory","volume":"33 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Functional Adaptive Huber Linear Regression\",\"authors\":\"Ling Peng, Xiaohui Liu, Heng Lian\",\"doi\":\"arxiv-2409.11053\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Robust estimation has played an important role in statistical and machine\\nlearning. However, its applications to functional linear regression are still\\nunder-developed. In this paper, we focus on Huber's loss with a diverging\\nrobustness parameter which was previously used in parametric models. Compared\\nto other robust methods such as median regression, the distinction is that the\\nproposed method aims to estimate the conditional mean robustly, instead of\\nestimating the conditional median. We only require $(1+\\\\kappa)$-th moment\\nassumption ($\\\\kappa>0$) on the noise distribution, and the established error\\nbounds match the optimal rate in the least-squares case as soon as $\\\\kappa\\\\ge\\n1$. We establish convergence rate in probability when the functional predictor\\nhas a finite 4-th moment, and finite-sample bound with exponential tail when\\nthe functional predictor is Gaussian, in terms of both prediction error and\\n$L^2$ error. 
The results also extend to the case of functional estimation in a\\nreproducing kernel Hilbert space (RKHS).\",\"PeriodicalId\":501379,\"journal\":{\"name\":\"arXiv - STAT - Statistics Theory\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - STAT - Statistics Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11053\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Statistics Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Robust estimation has played an important role in statistics and machine learning. However, its applications to functional linear regression are still under-developed. In this paper, we focus on Huber's loss with a diverging robustness parameter, an approach previously used in parametric models. Compared to other robust methods such as median regression, the distinction is that the proposed method aims to estimate the conditional mean robustly, rather than the conditional median. We require only a $(1+\kappa)$-th moment assumption ($\kappa>0$) on the noise distribution, and the established error bounds match the optimal rate of the least-squares case as soon as $\kappa\ge 1$. We establish the convergence rate in probability when the functional predictor has a finite fourth moment, and a finite-sample bound with an exponential tail when the functional predictor is Gaussian, in terms of both the prediction error and the $L^2$ error. The results also extend to functional estimation in a reproducing kernel Hilbert space (RKHS).
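For concreteness, the Huber loss with robustness parameter $\tau$ and the corresponding M-estimator take the following standard form (a common formulation; the paper's exact notation is not reproduced in the abstract):

$$
\ell_\tau(u) =
\begin{cases}
\tfrac{1}{2}u^2, & |u|\le\tau,\\
\tau|u| - \tfrac{1}{2}\tau^2, & |u|>\tau,
\end{cases}
\qquad
\hat\beta = \arg\min_{\beta}\ \frac{1}{n}\sum_{i=1}^{n} \ell_\tau\bigl(y_i - \langle X_i,\beta\rangle\bigr),
$$

where $\langle X_i,\beta\rangle = \int X_i(t)\beta(t)\,dt$ and $\tau$ diverges with the sample size, so the estimator targets the conditional mean while capping the influence of heavy-tailed noise.

Below is a minimal Python sketch of this idea on discretized curves: dimension is reduced by projecting onto leading FPCA components, and the empirical Huber risk is minimized numerically. The function names, the FPCA truncation level, and the particular diverging choice of $\tau$ are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def huber_loss(u, tau):
    """Elementwise Huber loss with robustness parameter tau."""
    abs_u = np.abs(u)
    return np.where(abs_u <= tau, 0.5 * u**2, tau * abs_u - 0.5 * tau**2)

def fit_functional_huber(X, y, tau, n_components=5):
    """Huber regression of y on functional predictors X.

    X : (n, p) array, each row a curve observed on a common grid of p points.
    The curves are projected onto the top FPCA components and the empirical
    Huber risk is minimized over the component scores plus an intercept.
    """
    Xc = X - X.mean(axis=0)
    # FPCA via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]          # (k, p) eigenfunctions on the grid
    scores = Xc @ basis.T              # (n, k) FPCA scores

    def risk(theta):
        resid = y - scores @ theta[:-1] - theta[-1]
        return huber_loss(resid, tau).mean()

    theta0 = np.zeros(n_components + 1)
    res = minimize(risk, theta0, method="BFGS")
    slope_scores, intercept = res.x[:-1], res.x[-1]
    beta_grid = basis.T @ slope_scores  # slope function evaluated on the grid
    return beta_grid, intercept

# Toy usage: heavy-tailed noise with only a low-order moment, tau diverging
# with n (the exponent 1/4 is an illustrative choice, not the paper's rate).
rng = np.random.default_rng(0)
n, p = 200, 100
grid = np.linspace(0, 1, p)
X = np.cumsum(rng.standard_normal((n, p)), axis=1) / np.sqrt(p)  # rough curves
beta_true = np.sin(2 * np.pi * grid)
y = X @ beta_true / p + rng.standard_t(df=2.5, size=n)  # finite (1+kappa)-th moment only
tau = n ** 0.25
beta_hat, b0 = fit_functional_huber(X, y, tau)
```

Because the Student-$t$ noise above has a finite $(1+\kappa)$-th moment only for $\kappa < 1.5$, this is exactly the heavy-tailed regime the abstract's moment assumption is meant to cover; with $\tau$ held fixed, the same estimator would instead target a different (non-mean) functional of the conditional distribution.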