Robust and computationally efficient gradient-based estimation

Impact Factor 0.8 · CAS Tier 4 (Mathematics) · JCR Q3, Statistics & Probability
Yibo Yan , Xiaozhou Wang , Riquan Zhang
Journal of Statistical Planning and Inference, Volume 242, Article 106351 · DOI: 10.1016/j.jspi.2025.106351 · Published 2025-09-22
Citations: 0

Abstract

In this paper, we propose a class of estimators based on the robust and computationally efficient gradient estimation for both low- and high-dimensional risk minimization framework. The gradient estimation in this work is constructed using a series of newly proposed univariate robust and efficient mean estimators. Our proposed estimators are obtained iteratively using a variant of the gradient descent method, where the update direction is determined by a robust and computationally efficient gradient. These estimators not only have explicit expressions and can be obtained through arithmetic operations but are also robust to arbitrary outliers in common statistical models. Theoretically, we establish the convergence of the algorithms and derive non-asymptotic error bounds for these iterative estimators. Specifically, we apply our methods to linear and logistic regression models, achieving robust parameter estimates and corresponding excess risk bounds. Unlike previous work, our theoretical results rely on a magnitude function of the outliers, which captures the extent of their deviation from the inliers. Finally, we present extensive simulation experiments on both low- and high-dimensional linear models to demonstrate the superior performance of our proposed estimators compared to several baseline methods.
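The abstract describes estimators obtained by running a gradient-descent variant in which the update direction is built from univariate robust mean estimates of the per-sample gradients. The paper's specific mean estimators are not reproduced here, so the sketch below uses a trimmed mean as a generic stand-in, applied coordinatewise to per-sample squared-loss gradients in a linear model; the function names and the trimmed-mean choice are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def trimmed_mean(x, trim=0.1):
    """Univariate robust mean: drop the lowest and highest `trim`
    fraction of values and average the rest. A generic stand-in for
    the paper's robust and efficient mean estimators."""
    lo, hi = np.quantile(x, [trim, 1.0 - trim])
    return x[(x >= lo) & (x <= hi)].mean()

def robust_gd_linear(X, y, lr=0.1, iters=200, trim=0.1):
    """Gradient descent for the squared loss where each coordinate of
    the update direction is a robust mean of per-sample gradients."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        resid = X @ theta - y        # per-sample residuals
        grads = X * resid[:, None]   # (n, d) matrix of per-sample gradients
        g = np.array([trimmed_mean(grads[:, j], trim) for j in range(d)])
        theta -= lr * g
    return theta

# Toy example: 10% of the responses are corrupted by large outliers.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
theta_true = np.array([2.0, -1.0])
y = X @ theta_true + 0.1 * rng.standard_normal(200)
y[rng.choice(200, 20, replace=False)] += 50.0  # arbitrary outliers
theta_hat = robust_gd_linear(X, y)
```

Because the outliers produce per-sample gradients with extreme magnitudes, trimming both tails discards them before averaging, so the iterates still contract toward the true coefficients; a plain sample-mean gradient would be pulled off target by the corrupted responses.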
Source journal

Journal of Statistical Planning and Inference (Mathematics – Statistics & Probability)
CiteScore: 2.10
Self-citation rate: 11.10%
Articles per year: 78
Review time: 3-6 weeks
Journal description: The Journal of Statistical Planning and Inference offers itself as a multifaceted and all-inclusive bridge between classical aspects of statistics and probability, and the emerging interdisciplinary aspects that have a potential of revolutionizing the subject. While we maintain our traditional strength in statistical inference, design, classical probability, and large sample methods, we also have a far more inclusive and broadened scope to keep up with the new problems that confront us as statisticians, mathematicians, and scientists. We publish high-quality articles in all branches of statistics, probability, discrete mathematics, machine learning, and bioinformatics. We also especially welcome well-written and up-to-date review articles on fundamental themes of statistics, probability, machine learning, and general biostatistics. Thoughtful letters to the editors, interesting problems in need of a solution, and short notes carrying an element of elegance or beauty are equally welcome.