Differential Error Feedback for Communication-Efficient Decentralized Learning

IF 4.6 · JCR Q1 (ENGINEERING, ELECTRICAL & ELECTRONIC) · CAS Tier 2 (Engineering & Technology)
Roula Nassif;Stefan Vlaski;Marco Carpentiero;Vincenzo Matta;Ali H. Sayed
{"title":"基于差分误差反馈的高效沟通分散学习","authors":"Roula Nassif;Stefan Vlaski;Marco Carpentiero;Vincenzo Matta;Ali H. Sayed","doi":"10.1109/TSP.2025.3564416","DOIUrl":null,"url":null,"abstract":"Communication-constrained algorithms for decentralized learning and optimization rely on local updates coupled with the exchange of compressed signals. In this context, <italic>differential quantization</i> is an effective technique to mitigate the negative impact of compression by leveraging correlations between successive iterates. In addition, the use of <italic>error feedback</i>, which consists of incorporating the compression error into subsequent steps, is a powerful mechanism to compensate for the bias caused by the compression. Under error feedback, performance guarantees in the literature have so far focused on algorithms employing a fusion center or a special class of contractive compressors that cannot be implemented with a finite number of bits. In this work, we propose a new <italic>decentralized</i> communication-efficient learning approach that blends differential quantization with error feedback. The approach is specifically tailored for decentralized learning problems where agents have individual risk functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. We show that, under some general conditions on the compression noise, and for sufficiently small step-sizes <inline-formula><tex-math>$\\mu$</tex-math></inline-formula>, the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate: by reducing <inline-formula><tex-math>$\\mu$</tex-math></inline-formula>, it is possible to keep the <italic>estimation errors small (on the order of</i> <inline-formula><tex-math>$\\mu$</tex-math></inline-formula><italic>) without increasing indefinitely the bit rate as</i> <inline-formula><tex-math>$\\mu\\rightarrow 0$</tex-math></inline-formula>. The results establish that, in the <italic>small step-size regime</i> and with a <italic>finite number of bits</i>, it is possible to attain the performance achievable in the absence of compression.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"1905-1921"},"PeriodicalIF":4.6000,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Differential Error Feedback for Communication-Efficient Decentralized Learning\",\"authors\":\"Roula Nassif;Stefan Vlaski;Marco Carpentiero;Vincenzo Matta;Ali H. Sayed\",\"doi\":\"10.1109/TSP.2025.3564416\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Communication-constrained algorithms for decentralized learning and optimization rely on local updates coupled with the exchange of compressed signals. In this context, <italic>differential quantization</i> is an effective technique to mitigate the negative impact of compression by leveraging correlations between successive iterates. In addition, the use of <italic>error feedback</i>, which consists of incorporating the compression error into subsequent steps, is a powerful mechanism to compensate for the bias caused by the compression. 
Under error feedback, performance guarantees in the literature have so far focused on algorithms employing a fusion center or a special class of contractive compressors that cannot be implemented with a finite number of bits. In this work, we propose a new <italic>decentralized</i> communication-efficient learning approach that blends differential quantization with error feedback. The approach is specifically tailored for decentralized learning problems where agents have individual risk functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. We show that, under some general conditions on the compression noise, and for sufficiently small step-sizes <inline-formula><tex-math>$\\\\mu$</tex-math></inline-formula>, the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate: by reducing <inline-formula><tex-math>$\\\\mu$</tex-math></inline-formula>, it is possible to keep the <italic>estimation errors small (on the order of</i> <inline-formula><tex-math>$\\\\mu$</tex-math></inline-formula><italic>) without increasing indefinitely the bit rate as</i> <inline-formula><tex-math>$\\\\mu\\\\rightarrow 0$</tex-math></inline-formula>. The results establish that, in the <italic>small step-size regime</i> and with a <italic>finite number of bits</i>, it is possible to attain the performance achievable in the absence of compression.\",\"PeriodicalId\":13330,\"journal\":{\"name\":\"IEEE Transactions on Signal Processing\",\"volume\":\"73 \",\"pages\":\"1905-1921\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10976577/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10976577/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Communication-constrained algorithms for decentralized learning and optimization rely on local updates coupled with the exchange of compressed signals. In this context, differential quantization is an effective technique to mitigate the negative impact of compression by leveraging correlations between successive iterates. In addition, the use of error feedback, which consists of incorporating the compression error into subsequent steps, is a powerful mechanism to compensate for the bias caused by the compression. Under error feedback, performance guarantees in the literature have so far focused on algorithms employing a fusion center or a special class of contractive compressors that cannot be implemented with a finite number of bits. In this work, we propose a new decentralized communication-efficient learning approach that blends differential quantization with error feedback. The approach is specifically tailored for decentralized learning problems where agents have individual risk functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. We show that, under some general conditions on the compression noise, and for sufficiently small step-sizes $\mu$, the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without increasing indefinitely the bit rate as $\mu\rightarrow 0$. The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
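To make the abstract's mechanism concrete, the sketch below shows one common way to blend differential quantization with error feedback on a single agent's transmit path, in Python. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the dithered uniform quantizer, the fixed grid step, and the toy quadratic risk in the usage loop are illustrative choices, and the subspace projection and neighbor combination steps of the full decentralized strategy are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_uniform_quantizer(x, grid=0.05):
    # Probabilistic rounding to a uniform grid. Unbiased, and realizable
    # with a finite number of bits once the input range is bounded; a
    # stand-in for the finite-bit compressors the abstract refers to.
    low = np.floor(x / grid) * grid
    p_up = (x - low) / grid                 # probability of rounding up
    return low + grid * (rng.random(x.shape) < p_up)

def compress_iterate(w, last_sent, err, grid=0.05):
    # Differential quantization: quantize the innovation (w + err) - last_sent
    # rather than w itself, exploiting correlation between successive iterates.
    innovation = (w + err) - last_sent
    q = dithered_uniform_quantizer(innovation, grid)
    # Error feedback: keep what the quantizer lost and re-inject it next time.
    new_err = innovation - q
    new_sent = last_sent + q                # the receiver reconstructs this value
    return new_sent, new_err

# Toy usage: gradient descent on a quadratic risk, transmitting compressed iterates.
mu = 0.05                                   # small step-size, as in the abstract
w, sent, err = np.zeros(4), np.zeros(4), np.zeros(4)
target = np.array([1.0, -2.0, 0.5, 3.0])    # minimizer of the toy risk
for _ in range(500):
    w = w - mu * (w - target)               # local gradient step
    sent, err = compress_iterate(w, sent, err)
print(np.linalg.norm(sent - target))        # small residual: 'sent' tracks w
```

Because only the innovation is quantized, its magnitude shrinks as the iterates converge, so a fixed finite-bit grid suffices; the error-feedback state then compensates for the residual compression bias, which is the intuition behind the small-error guarantees (on the order of $\mu$) stated in the abstract.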
Source journal
IEEE Transactions on Signal Processing
Category: Engineering & Technology – Engineering: Electronic & Electrical
CiteScore: 11.20
Self-citation rate: 9.30%
Annual publications: 310
Review time: 3.0 months
Journal overview: The IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals. The term “signal” includes, among others, audio, video, speech, image, communication, geophysical, sonar, radar, medical and musical signals. Examples of topics of interest include, but are not limited to, information processing and the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals.