{"title":"Fundamental Limits of Distributed Optimization over Multiple Access Channel","authors":"Shubham K. Jha, Prathamesh Mayekar","doi":"10.1109/ITW55543.2023.10161622","DOIUrl":null,"url":null,"abstract":"We consider distributed optimization over a d-dimensional space, where K remote clients send coded gradient estimates over an additive Gaussian Multiple Access Channel (MAC) with noise variance $\\sigma _z^2$. Furthermore, the codewords from the K clients must satisfy the average power constraint of P, resulting in a signal-to-noise ratio (SNR) of $KP/\\sigma _z^2$. In this paper, we study the fundamental limits imposed by MAC on the convergence rate of any distributed optimization algorithm and design optimal communication schemes to achieve these limits. Our first result is a lower bound for the convergence rate showing that compared to the centralized setting, communicating over a MAC imposes a slowdown of $\\sqrt {d/\\frac{1}{2}\\log (1 + {\\text{SNR}})}$ on any protocol. Next, we design a computationally tractable digital communication scheme that matches the lower bound to a logarithmic factor in K when combined with a projected stochastic gradient descent algorithm. At the heart of our communication scheme is a careful combination of several compression and modulation ideas such as quantizing along random bases, Wyner-Ziv compression, modulo-lattice decoding, and amplitude shift keying. We also show that analog coding schemes, which are popular due to their ease of implementation, can give close to optimal convergence rates at low SNR but experience a slowdown of roughly $\\sqrt d$ at high SNR.","PeriodicalId":439800,"journal":{"name":"2023 IEEE Information Theory Workshop (ITW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Information Theory Workshop (ITW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITW55543.2023.10161622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We consider distributed optimization over a d-dimensional space, where K remote clients send coded gradient estimates over an additive Gaussian Multiple Access Channel (MAC) with noise variance $\sigma_z^2$. Furthermore, the codewords from the K clients must satisfy an average power constraint of P, resulting in a signal-to-noise ratio (SNR) of $KP/\sigma_z^2$. In this paper, we study the fundamental limits imposed by the MAC on the convergence rate of any distributed optimization algorithm and design optimal communication schemes to achieve these limits. Our first result is a lower bound on the convergence rate showing that, compared to the centralized setting, communicating over a MAC imposes a slowdown of $\sqrt{d/\left(\tfrac{1}{2}\log (1 + \mathrm{SNR})\right)}$ on any protocol. Next, we design a computationally tractable digital communication scheme that, when combined with a projected stochastic gradient descent algorithm, matches the lower bound up to a logarithmic factor in K. At the heart of our communication scheme is a careful combination of several compression and modulation ideas, such as quantizing along random bases, Wyner-Ziv compression, modulo-lattice decoding, and amplitude shift keying. We also show that analog coding schemes, which are popular due to their ease of implementation, can give close-to-optimal convergence rates at low SNR but suffer a slowdown of roughly $\sqrt{d}$ at high SNR.
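A minimal numerical sketch of the quantities named in the abstract, not the authors' scheme: it evaluates the slowdown factor $\sqrt{d/(\tfrac{1}{2}\log(1+\mathrm{SNR}))}$ and the rough $\sqrt{d}$ analog-coding slowdown, and simulates a toy uncoded (analog) transmission of gradient estimates over an additive Gaussian MAC under the power constraint. All parameter values (d, K, P, sigma_z2) and the simple common-scaling encoder are illustrative assumptions; the paper's digital scheme (random-basis quantization, Wyner-Ziv compression, modulo-lattice decoding, amplitude shift keying) is not implemented here.

```python
import numpy as np

# Illustrative problem parameters (assumptions, not taken from the paper).
d = 1000          # dimension of the optimization space
K = 20            # number of remote clients
P = 1.0           # average power constraint per client
sigma_z2 = 0.5    # MAC noise variance

# SNR and slowdown factors as stated in the abstract.
snr = K * P / sigma_z2
digital_slowdown = np.sqrt(d / (0.5 * np.log(1.0 + snr)))  # lower-bound slowdown for any protocol
analog_slowdown = np.sqrt(d)                               # rough analog-coding slowdown at high SNR

print(f"SNR = {snr:.2f}")
print(f"slowdown of any protocol over the MAC ~ {digital_slowdown:.1f}x")
print(f"analog coding slowdown at high SNR    ~ {analog_slowdown:.1f}x")

# Toy analog transmission: every client scales its gradient estimate by a common
# factor so each codeword meets the average power constraint P, and the MAC
# output is the noisy superposition of all K codewords.
rng = np.random.default_rng(0)
grads = rng.normal(size=(K, d))                 # stand-in stochastic gradient estimates
max_norm = np.linalg.norm(grads, axis=1).max()
c = np.sqrt(P * d) / max_norm                   # common scaling => ||c*g||^2 / d <= P for every client
x = c * grads                                   # analog codewords
y = x.sum(axis=0) + rng.normal(scale=np.sqrt(sigma_z2), size=d)  # MAC output
grad_hat = y / (K * c)                          # noisy estimate of the average gradient
err = np.linalg.norm(grad_hat - grads.mean(axis=0))
print(f"analog-scheme estimation error: {err:.3f}")
```

In this toy decoder the server simply rescales the channel output; feeding `grad_hat` to a projected SGD step would mimic the overall optimization loop the abstract describes, with the estimation noise (and hence the convergence slowdown) governed by the SNR.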