Byzantine Fault-Tolerant Parallelized Stochastic Gradient Descent for Linear Regression

Nirupam Gupta, N. Vaidya
{"title":"线性回归的拜占庭容错并行随机梯度下降","authors":"Nirupam Gupta, N. Vaidya","doi":"10.1109/ALLERTON.2019.8919735","DOIUrl":null,"url":null,"abstract":"This paper addresses the problem of Byzantine fault-tolerance in parallelized stochastic gradient descent (SGD) method solving for a linear regression problem. We consider a synchronous system comprising of a master and multiple workers, where up to a (known) constant number of workers are Byzantine faulty. Byzantine faulty workers may send incorrect information to the master during an execution of the parallelized SGD method. To mitigate the detrimental impact of Byzantine faulty workers, we replace the averaging of gradients in the traditional parallelized SGD method by a provably more robust gradient aggregation rule. The crux of the proposed gradient aggregation rule is a gradient-filter, named comparative gradient clipping(CGC) filter. We show that the resultant parallelized SGD method obtains a good estimate of the regression parameter even in presence of bounded fraction of Byzantine faulty workers. The upper bound derived for the asymptotic estimation error only grows linearly with the fraction of Byzantine faulty workers.","PeriodicalId":120479,"journal":{"name":"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Byzantine Fault-Tolerant Parallelized Stochastic Gradient Descent for Linear Regression\",\"authors\":\"Nirupam Gupta, N. Vaidya\",\"doi\":\"10.1109/ALLERTON.2019.8919735\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses the problem of Byzantine fault-tolerance in parallelized stochastic gradient descent (SGD) method solving for a linear regression problem. We consider a synchronous system comprising of a master and multiple workers, where up to a (known) constant number of workers are Byzantine faulty. Byzantine faulty workers may send incorrect information to the master during an execution of the parallelized SGD method. To mitigate the detrimental impact of Byzantine faulty workers, we replace the averaging of gradients in the traditional parallelized SGD method by a provably more robust gradient aggregation rule. The crux of the proposed gradient aggregation rule is a gradient-filter, named comparative gradient clipping(CGC) filter. We show that the resultant parallelized SGD method obtains a good estimate of the regression parameter even in presence of bounded fraction of Byzantine faulty workers. 
The upper bound derived for the asymptotic estimation error only grows linearly with the fraction of Byzantine faulty workers.\",\"PeriodicalId\":120479,\"journal\":{\"name\":\"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"135 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2019.8919735\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2019.8919735","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16

Abstract

This paper addresses the problem of Byzantine fault-tolerance in the parallelized stochastic gradient descent (SGD) method for solving a linear regression problem. We consider a synchronous system comprising a master and multiple workers, where up to a (known) constant number of workers are Byzantine faulty. Byzantine faulty workers may send incorrect information to the master during an execution of the parallelized SGD method. To mitigate the detrimental impact of Byzantine faulty workers, we replace the averaging of gradients in the traditional parallelized SGD method with a provably more robust gradient aggregation rule. The crux of the proposed gradient aggregation rule is a gradient-filter, named the comparative gradient clipping (CGC) filter. We show that the resulting parallelized SGD method obtains a good estimate of the regression parameter even in the presence of a bounded fraction of Byzantine faulty workers. The upper bound derived for the asymptotic estimation error grows only linearly with the fraction of Byzantine faulty workers.
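To make the aggregation rule concrete, below is a minimal NumPy sketch of a CGC-style gradient filter, assuming the master receives one gradient vector per worker each round and knows an upper bound f on the number of Byzantine workers. The clipping rule follows the paper's description: sort the n received gradients by Euclidean norm, scale the f largest-norm gradients down so their norms equal the (f+1)-th largest norm, and average all n results. The names cgc_filter and master_step, the learning rate, and the toy data are illustrative assumptions, not taken from the paper.

import numpy as np

def cgc_filter(gradients, f):
    # Comparative gradient clipping (CGC) sketch: scale the f largest-norm
    # gradients down to the (f+1)-th largest norm, then average all n of
    # the (possibly clipped) gradients.
    grads = np.asarray(gradients, dtype=float)        # shape (n, d)
    norms = np.linalg.norm(grads, axis=1)
    n = len(grads)
    clip_norm = np.sort(norms)[n - f - 1]             # (f+1)-th largest norm
    # Scale factor is < 1 only for the (at most f) gradients above the threshold.
    scale = np.where(norms > clip_norm, clip_norm / np.maximum(norms, 1e-12), 1.0)
    return np.mean(grads * scale[:, None], axis=0)

def master_step(w, worker_grads, f, lr=0.01):
    # One synchronous step at the master: aggregate with the CGC filter
    # instead of plain averaging, then take a standard gradient step.
    return w - lr * cgc_filter(worker_grads, f)

# Toy usage: 4 honest workers plus 1 Byzantine worker sending a huge gradient.
rng = np.random.default_rng(0)
w = np.zeros(3)
honest = [rng.normal(size=3) for _ in range(4)]
byzantine = [1e6 * np.ones(3)]                        # adversarial gradient
w = master_step(w, honest + byzantine, f=1)           # Byzantine influence is clipped

Because the clipping threshold is taken from the received gradients themselves, honest gradients with moderate norms pass through unchanged, while an adversarial gradient's contribution to the average is bounded by the norms of the honest gradients.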