Gradient Compression via Count-Sketch for Analog Federated Learning

Chanho Park, Jinhyun Ahn, Joonhyuk Kang
MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM), published 2021-11-29. DOI: 10.1109/MILCOM52596.2021.9653138

Abstract

Federated learning (FL) is an actively studied training protocol for distributed artificial intelligence (AI). One of the challenges in its implementation is the communication bottleneck on the uplink from devices to the FL server. To address this issue, many studies have focused on improving communication efficiency. In particular, analog transmission in wireless implementations provides a new opportunity by allowing the whole bandwidth to be fully reused at each device. However, despite the communication efficiency of analog FL, it is still necessary to compress the parameters to fit the allocated communication bandwidth. In this paper, we introduce the count-sketch (CS) algorithm as a compression scheme in analog FL to overcome the limited channel resources. We develop a more communication-efficient FL system by applying the CS algorithm to the wireless implementation of FL. Numerical experiments show that the proposed scheme outperforms benchmark schemes, including CA-DSGD and state-of-the-art digital schemes. Furthermore, we observe that the proposed scheme is considerably robust to variations in transmission power and channel resources.
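The count-sketch idea behind the compression step can be sketched as follows: each gradient coordinate is hashed with a random sign into a small table of buckets, and a coordinate is later estimated as the median of its signed bucket values across rows. This is a minimal illustrative NumPy sketch of the generic count-sketch data structure, not the paper's implementation; the dimensions, hash construction, and function names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: gradient dimension, sketch rows, buckets per row.
d, rows, width = 1000, 5, 200

# Shared random hashes: a bucket index and a +/-1 sign for every
# (row, coordinate) pair. In FL, server and devices would share these seeds.
buckets = rng.integers(0, width, size=(rows, d))
signs = rng.choice([-1.0, 1.0], size=(rows, d))

def compress(grad):
    """Sketch a d-dimensional gradient into a rows x width table."""
    table = np.zeros((rows, width))
    for r in range(rows):
        # Unbuffered scatter-add: colliding coordinates accumulate.
        np.add.at(table[r], buckets[r], signs[r] * grad)
    return table

def decompress(table):
    """Estimate each coordinate as the median of its signed bucket values."""
    est = signs * table[np.arange(rows)[:, None], buckets]  # shape (rows, d)
    return np.median(est, axis=0)

# A sparse, heavy-hitter-like gradient: a few large entries, rest zero.
grad = np.zeros(d)
grad[:10] = rng.normal(0, 10, 10)

approx = decompress(compress(grad))
```

Because only `rows * width` values are transmitted instead of `d`, the sketch fits the allocated bandwidth, and the median across rows keeps the large (heavy-hitter) gradient coordinates recoverable despite hash collisions.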