{"title":"拜占庭式稳健高效分布式稀疏性学习:一种代用复合量化回归方法","authors":"Canyi Chen, Zhengtian Zhu","doi":"10.1007/s11222-024-10470-0","DOIUrl":null,"url":null,"abstract":"<p>Distributed statistical learning has gained significant traction recently, mainly due to the availability of unprecedentedly massive datasets. The objective of distributed statistical learning is to learn models by effectively utilizing data scattered across various machines. However, its performance can be impeded by three significant challenges: arbitrary noises, high dimensionality, and machine failures—the latter being specifically referred to as Byzantine failure. To address the first two challenges, we propose leveraging the potential of composite quantile regression in conjunction with the <span>\\(\\ell _1\\)</span> penalty. However, this combination introduces a <i>doubly</i> nonsmooth objective function, posing new challenges. In such scenarios, most existing Byzantine-robust methods exhibit slow sublinear convergence rates and fail to achieve near-optimal statistical convergence rates. To fill this gap, we introduce a novel smoothing procedure that effectively handles the nonsmooth aspects. This innovation allows us to develop a Byzantine-robust sparsity learning algorithm that converges provably to the near-optimal convergence rate <i>linearly</i>. Moreover, we establish support recovery guarantees for our proposed methods. We substantiate the effectiveness of our approaches through comprehensive empirical analyses.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Byzantine-robust and efficient distributed sparsity learning: a surrogate composite quantile regression approach\",\"authors\":\"Canyi Chen, Zhengtian Zhu\",\"doi\":\"10.1007/s11222-024-10470-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Distributed statistical learning has gained significant traction recently, mainly due to the availability of unprecedentedly massive datasets. The objective of distributed statistical learning is to learn models by effectively utilizing data scattered across various machines. However, its performance can be impeded by three significant challenges: arbitrary noises, high dimensionality, and machine failures—the latter being specifically referred to as Byzantine failure. To address the first two challenges, we propose leveraging the potential of composite quantile regression in conjunction with the <span>\\\\(\\\\ell _1\\\\)</span> penalty. However, this combination introduces a <i>doubly</i> nonsmooth objective function, posing new challenges. In such scenarios, most existing Byzantine-robust methods exhibit slow sublinear convergence rates and fail to achieve near-optimal statistical convergence rates. To fill this gap, we introduce a novel smoothing procedure that effectively handles the nonsmooth aspects. This innovation allows us to develop a Byzantine-robust sparsity learning algorithm that converges provably to the near-optimal convergence rate <i>linearly</i>. Moreover, we establish support recovery guarantees for our proposed methods. 
We substantiate the effectiveness of our approaches through comprehensive empirical analyses.</p>\",\"PeriodicalId\":22058,\"journal\":{\"name\":\"Statistics and Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Statistics and Computing\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1007/s11222-024-10470-0\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Statistics and Computing","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s11222-024-10470-0","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Byzantine-robust and efficient distributed sparsity learning: a surrogate composite quantile regression approach
Distributed statistical learning has gained significant traction recently, mainly due to the availability of unprecedentedly massive datasets. Its objective is to learn models by effectively utilizing data scattered across many machines. Performance, however, can be impeded by three significant challenges: arbitrary noise, high dimensionality, and machine failures, the last of which is referred to as Byzantine failure. To address the first two challenges, we propose leveraging the potential of composite quantile regression in conjunction with the \(\ell_1\) penalty. This combination, however, introduces a doubly nonsmooth objective function, posing new challenges. In such settings, most existing Byzantine-robust methods exhibit slow sublinear convergence and fail to achieve near-optimal statistical convergence rates. To fill this gap, we introduce a novel smoothing procedure that effectively handles the nonsmooth components. This innovation allows us to develop a Byzantine-robust sparsity learning algorithm that provably converges linearly to a near-optimal statistical rate. Moreover, we establish support recovery guarantees for our proposed methods. We substantiate the effectiveness of our approaches through comprehensive empirical analyses.
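The abstract does not spell out the algorithm, so the following is only a minimal sketch to fix ideas. It assumes a Gaussian-kernel convolution smoothing of the check loss (so the derivative of the smoothed loss at residual \(u\) is \(\tau - \Phi(-u/h)\)) and a coordinate-wise median as the Byzantine-robust aggregator; the function names (smoothed_cqr_grad, byzantine_robust_round), the bandwidth h, the step size eta, and the penalty lam are all illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def smoothed_cqr_grad(X, y, beta, b, taus, h):
    """Gradient of a kernel-smoothed composite quantile loss on one machine.

    Illustrative smoothing: convolving the check loss rho_tau(u) = u*(tau - 1{u<0})
    with a Gaussian kernel of bandwidth h yields the smooth derivative
    tau - Phi(-u/h); the paper's exact smoothing procedure may differ.
    """
    n, K = X.shape[0], len(taus)
    g_beta = np.zeros(X.shape[1])
    g_b = np.zeros(K)
    for k, tau in enumerate(taus):
        r = y - b[k] - X @ beta          # residuals at quantile level tau
        w = tau - norm.cdf(-r / h)       # smoothed "subgradient" of the check loss
        g_beta -= X.T @ w / (n * K)
        g_b[k] = -w.mean() / K
    return g_beta, g_b

def soft_threshold(z, t):
    """Proximal operator of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def byzantine_robust_round(shards, beta, b, taus, h, lam, eta):
    """One communication round: each worker reports its local gradient; the
    master aggregates with a coordinate-wise median (a standard Byzantine-robust
    aggregator, used here for illustration) and takes a proximal-gradient step."""
    grads = [smoothed_cqr_grad(X, y, beta, b, taus, h) for X, y in shards]
    g_beta = np.median([g for g, _ in grads], axis=0)  # robust to a minority of corrupted reports
    g_b = np.median([gb for _, gb in grads], axis=0)
    beta = soft_threshold(beta - eta * g_beta, eta * lam)  # sparsity via l1 proximal update
    return beta, b - eta * g_b
```

With this aggregator, just under half of the workers can send arbitrary gradients without breaking the update, and the bandwidth h trades off the smoothness of the surrogate against its approximation error to the original check loss.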
Journal Introduction:
Statistics and Computing is a bimonthly refereed journal that publishes papers spanning the interface between the statistical and computing sciences.
In particular, it addresses the use of statistical concepts in computing science, for example in machine learning, computer vision and data analytics, as well as the use of computers in data modelling, prediction and analysis. Specific topics covered include: techniques for evaluating analytically intractable problems such as bootstrap resampling, Markov chain Monte Carlo, sequential Monte Carlo, approximate Bayesian computation, search and optimization methods, stochastic simulation and Monte Carlo, graphics, computer environments, statistical approaches to software errors, information retrieval, machine learning, statistics of databases and database technology, huge data sets and big data analytics, computer algebra, graphical models, image processing, tomography, inverse problems and uncertainty quantification.
In addition, the journal contains original research reports, authoritative review papers, discussion papers, and occasional special issues on particular topics or proceedings of relevant conferences. Statistics and Computing also publishes book review and software review sections.