Gradient Coding With Iterative Block Leverage Score Sampling

IF 2.2 · JCR Q3 · CAS Region 3, Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS
Neophytos Charalambides;Mert Pilanci;Alfred O. Hero
DOI: 10.1109/TIT.2024.3420222
IEEE Transactions on Information Theory, vol. 70, no. 9, pp. 6639–6664. Published 2024-06-27.
https://ieeexplore.ieee.org/document/10576059/
Citations: 0

Abstract

Gradient coding is a method for mitigating straggling servers in a centralized computing network that uses erasure-coding techniques to distributively carry out first-order optimization methods. Randomized numerical linear algebra uses randomization to develop improved algorithms for large-scale linear algebra computations. In this paper, we propose a method for distributed optimization that combines gradient coding and randomized numerical linear algebra. The proposed method uses a randomized $\ell_{2}$-subspace embedding and a gradient coding technique to distribute blocks of data to the computational nodes of a centralized network, and at each iteration the central server only requires a small number of computations to obtain the steepest descent update. The novelty of our approach is that the data is replicated according to importance scores, called block leverage scores, in contrast to most gradient coding approaches that uniformly replicate the data blocks. Furthermore, we do not require a decoding step at each iteration, avoiding a bottleneck in previous gradient coding schemes. We show that our approach results in a valid $\ell_{2}$-subspace embedding, and that our resulting approximation converges to the optimal solution.
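As a rough illustration of the block leverage score idea the abstract describes (a minimal NumPy sketch under simplifying assumptions, not the paper's full gradient coding scheme): the leverage mass of each row-block of a matrix $A$ can be read off a thin SVD, and blocks can then be sampled with probability proportional to that mass and rescaled, yielding an $\ell_{2}$-subspace-embedding-style sketch. All variable names and the uniform block partition below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data matrix A, partitioned into K equal row-blocks.
n, d, K = 40, 5, 8
A = rng.standard_normal((n, d))
blocks = np.array_split(np.arange(n), K)

# Block leverage score of block i: the squared Frobenius norm of the
# corresponding rows of U, where A = U @ diag(s) @ Vt is a thin SVD.
# The scores sum to rank(A) = d for a full-column-rank A.
U, _, _ = np.linalg.svd(A, full_matrices=False)
scores = np.array([np.sum(U[idx] ** 2) for idx in blocks])
probs = scores / scores.sum()

# Sample q blocks i.i.d. with these probabilities, rescaling each
# sampled block by 1/sqrt(q * p_i) so the sketch is unbiased:
# E[S_A.T @ S_A] = A.T @ A.
q = 6
picked = rng.choice(K, size=q, p=probs)
S_A = np.vstack([A[blocks[i]] / np.sqrt(q * probs[i]) for i in picked])

# S_A has far fewer rows than A, yet S_A.T @ S_A approximates A.T @ A,
# which is the sense in which the sampled blocks suffice for a
# steepest-descent update on a least-squares objective.
```

In the paper's distributed setting, replication of blocks across servers plays the role of this importance sampling, so the central server can aggregate whichever responsive servers' partial gradients arrive first without a decoding step.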
Source journal
IEEE Transactions on Information Theory (Engineering: Electrical & Electronic)
CiteScore: 5.70
Self-citation rate: 20.00%
Articles per year: 514
Review time: 12 months
Journal description: The IEEE Transactions on Information Theory is a journal that publishes theoretical and experimental papers concerned with the transmission, processing, and utilization of information. The boundaries of acceptable subject matter are intentionally not sharply delimited. Rather, it is hoped that as the focus of research activity changes, a flexible policy will permit this Transactions to follow suit. Current appropriate topics are best reflected by recent Tables of Contents; they are summarized in the titles of editorial areas that appear on the inside front cover.