Data Encoding for Byzantine-Resilient Distributed Gradient Descent

Deepesh Data, Linqi Song, S. Diggavi
{"title":"拜占庭弹性分布式梯度下降的数据编码","authors":"Deepesh Data, Linqi Song, S. Diggavi","doi":"10.1109/ALLERTON.2018.8636017","DOIUrl":null,"url":null,"abstract":"We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector, iteratively using gradient descent (GD). The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to$t\\leq \\displaystyle \\lfloor\\frac{m-1}{2}\\rfloor$ corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed GD algorithm, without adversaries, for$t\\leq \\displaystyle \\frac{m}{3}$.","PeriodicalId":299280,"journal":{"name":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"Data Encoding for Byzantine-Resilient Distributed Gradient Descent\",\"authors\":\"Deepesh Data, Linqi Song, S. Diggavi\",\"doi\":\"10.1109/ALLERTON.2018.8636017\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector, iteratively using gradient descent (GD). The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to$t\\\\leq \\\\displaystyle \\\\lfloor\\\\frac{m-1}{2}\\\\rfloor$ corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). 
As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed GD algorithm, without adversaries, for$t\\\\leq \\\\displaystyle \\\\frac{m}{3}$.\",\"PeriodicalId\":299280,\"journal\":{\"name\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2018.8636017\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2018.8636017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 20

Abstract

We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which may be Byzantine adversaries, and a designated (master) node computes the model/parameter vector iteratively using gradient descent (GD). The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To counter this, we propose a method based on data encoding and (real) error correction. We can tolerate up to $t \leq \lfloor \frac{m-1}{2} \rfloor$ corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme that enables computationally efficient data encoding, and demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, for $t \leq \frac{m}{3}$, our scheme incurs only a constant overhead in storage and computational complexity over the adversary-free distributed GD algorithm.
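To make the redundancy idea behind the abstract concrete, here is a minimal, hypothetical sketch. It is not the paper's sparse encoding scheme: it uses the simplest possible "code", replicating each data shard across 2t+1 workers and majority-voting the returned gradients, to illustrate why redundancy lets the master recover the true gradient despite up to t arbitrary (Byzantine) replies. All names (`resilient_gd`, `majority_decode`, etc.) are illustrative, and the replication costs a (2t+1)x storage blow-up, whereas the paper's sparse real-error-correcting code achieves the same tolerance far more efficiently.

```python
import numpy as np

# Hypothetical illustration of redundancy-based Byzantine resilience.
# NOT the paper's scheme: replication is the crudest possible encoding;
# it only demonstrates why redundancy plus decoding defeats t adversaries.

def shard_gradient(X, y, w):
    """Least-squares gradient on one data shard: X^T (X w - y)."""
    return X.T @ (X @ w - y)

def majority_decode(replies, t):
    """Pick the vector returned by a strict majority of 2t+1 replicas.

    Honest replicas of the same shard compute identical gradients, so
    with at most t corrupt replies the true gradient appears > t times.
    """
    for candidate in replies:
        if sum(np.allclose(candidate, r) for r in replies) > t:
            return candidate
    raise ValueError("no majority: more than t corrupt replies?")

def resilient_gd(shards, t, steps=200, lr=1e-3, seed=0):
    """GD where each shard's gradient survives t Byzantine workers
    out of the 2t+1 replicas holding that shard."""
    rng = np.random.default_rng(seed)
    dim = shards[0][0].shape[1]
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for X, y in shards:
            g = shard_gradient(X, y, w)
            # Simulate replies: t adversaries send arbitrary vectors,
            # the remaining t+1 honest replicas send the true gradient.
            replies = [rng.normal(scale=100.0, size=dim) for _ in range(t)]
            replies += [g.copy() for _ in range(t + 1)]
            grad += majority_decode(replies, t)
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = rng.normal(size=3)
    shards = []
    for _ in range(4):
        X = rng.normal(size=(20, 3))
        shards.append((X, X @ w_true))
    w_hat = resilient_gd(shards, t=2)
    print("recovery error:", np.linalg.norm(w_hat - w_true))
```

The sketch also makes the abstract's trade-off visible: tolerating more adversaries forces more redundancy per shard (here, linearly in t), which is exactly the storage/computation cost that the paper's sparse encoding is designed to keep to a constant factor for $t \leq \frac{m}{3}$.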