Reducing communication overhead through one-shot model pruning in federated learning

Impact factor: 2.2 · CAS ranking: Quartile 4, Computer Science · JCR: Q3, Telecommunications
Rómulo Bustincio, Allan M. de Souza, Joahannes B. D. da Costa, Luis F. G. Gonzalez, Luiz F. Bittencourt
{"title":"Reducing communication overhead through one-shot model pruning in federated learning","authors":"Rómulo Bustincio,&nbsp;Allan M. de Souza,&nbsp;Joahannes B. D. da Costa,&nbsp;Luis F. G. Gonzalez,&nbsp;Luiz F. Bittencourt","doi":"10.1007/s12243-025-01097-x","DOIUrl":null,"url":null,"abstract":"<div><p>In the realm of federated learning, a collaborative yet decentralized approach to machine learning, communication efficiency is a critical concern, particularly under constraints of limited bandwidth and resources. This paper evaluates FedSNIP, a novel method that leverages the SNIP (Single-shot Network Pruning based on Connection Sensitivity) technique within this context. By utilizing SNIP, FedSNIP effectively prunes neural networks, converting numerous weights to zero and resulting in sparser weight representations. This substantial reduction in weight density significantly decreases the volume of parameters that need to be communicated to the server, thereby reducing the communication overhead. Our experiments on the CIFAR-10 and UCI-HAR dataset demonstrate that FedSNIP not only lowers the data transmission between clients and the server but also maintains competitive model accuracy, comparable to conventional federated learning models. Additionally, we analyze various compression algorithms applied after pruning, specifically evaluating the compressed sparse row, coordinate list, and compressed sparse column formats to identify the most efficient approach. Our results show that compressed sparse row not only compresses the data more effectively and quickly but also achieves the highest reduction in data size, making it the most suitable format for enhancing the efficiency of federated learning, particularly in scenarios with restricted communication capabilities.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"80 9-10","pages":"901 - 913"},"PeriodicalIF":2.2000,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Telecommunications","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s12243-025-01097-x","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

In the realm of federated learning, a collaborative yet decentralized approach to machine learning, communication efficiency is a critical concern, particularly under constraints of limited bandwidth and resources. This paper evaluates FedSNIP, a novel method that leverages the SNIP (Single-shot Network Pruning based on Connection Sensitivity) technique within this context. By utilizing SNIP, FedSNIP effectively prunes neural networks, converting numerous weights to zero and resulting in sparser weight representations. This substantial reduction in weight density significantly decreases the volume of parameters that need to be communicated to the server, thereby reducing the communication overhead. Our experiments on the CIFAR-10 and UCI-HAR datasets demonstrate that FedSNIP not only lowers the data transmission between clients and the server but also maintains competitive model accuracy, comparable to conventional federated learning models. Additionally, we analyze various compression algorithms applied after pruning, specifically evaluating the compressed sparse row, coordinate list, and compressed sparse column formats to identify the most efficient approach. Our results show that compressed sparse row not only compresses the data more effectively and quickly but also achieves the highest reduction in data size, making it the most suitable format for enhancing the efficiency of federated learning, particularly in scenarios with restricted communication capabilities.
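
The abstract describes SNIP-style one-shot pruning at initialization, where connection sensitivity is estimated from a single mini-batch and the least sensitive weights are zeroed before training. The sketch below illustrates that idea in PyTorch; the helper names `snip_masks` and `apply_masks`, the default 90% sparsity, and the cross-entropy loss are illustrative assumptions, not details taken from the FedSNIP implementation.

```python
# Minimal sketch of SNIP-style one-shot pruning (hypothetical helpers, not the
# authors' exact FedSNIP code). Connection sensitivity |grad * weight| is
# estimated on one mini-batch at initialization; only the most sensitive
# connections are kept, and the resulting masks are reused every round.
import torch
import torch.nn as nn
import torch.nn.functional as F


def snip_masks(model: nn.Module, batch, targets, sparsity: float = 0.9):
    """Return a {param_name: 0/1 mask} dict keeping the (1 - sparsity) fraction
    of connections with the largest SNIP sensitivity scores."""
    model.zero_grad()
    loss = F.cross_entropy(model(batch), targets)  # assumed classification loss
    loss.backward()

    with torch.no_grad():
        # Sensitivity of each connection: |dL/dw * w|, i.e. |dL/dc| at c = 1.
        scores = {
            name: (p.grad * p).abs()
            for name, p in model.named_parameters()
            if p.grad is not None and p.dim() > 1  # prune weight tensors only
        }
        all_scores = torch.cat([s.flatten() for s in scores.values()])
        k = max(1, int((1.0 - sparsity) * all_scores.numel()))
        threshold = torch.topk(all_scores, k, largest=True).values.min()
        return {name: (s >= threshold).float() for name, s in scores.items()}


def apply_masks(model: nn.Module, masks):
    """Zero out pruned weights in-place so only surviving weights are trained
    and communicated."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```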
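The abstract also compares sparse storage formats (CSR, COO, CSC) for transmitting the pruned updates. The following is a small illustration using SciPy's standard sparse containers; the matrix shape and 90% sparsity level are assumptions for demonstration, and the actual savings depend on the pruned model.

```python
# Illustrative size comparison of CSR, COO, and CSC for a pruned weight
# matrix (SciPy classes are real; the shape and sparsity are assumptions).
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = rng.standard_normal((512, 512)).astype(np.float32)
mask = rng.random(dense.shape) < 0.10          # keep roughly 10% of weights
pruned = dense * mask

for name, fmt in [("CSR", sparse.csr_matrix),
                  ("COO", sparse.coo_matrix),
                  ("CSC", sparse.csc_matrix)]:
    m = fmt(pruned)
    if name == "COO":
        nbytes = m.data.nbytes + m.row.nbytes + m.col.nbytes
    else:
        nbytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
    print(f"{name}: {nbytes / 1024:.1f} KiB (dense: {pruned.nbytes / 1024:.1f} KiB)")
```

At the same sparsity, CSR and CSC store one index per nonzero plus a short row/column pointer array, whereas COO stores two full index arrays per nonzero, which is why the compressed formats are typically smaller.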


Source journal

Annals of Telecommunications (Engineering & Technology – Telecommunications)
CiteScore: 5.20
Self-citation rate: 5.30%
Articles per year: 37
Review time: 4.5 months
Journal description: Annals of Telecommunications is an international journal publishing original peer-reviewed papers in the field of telecommunications. It covers all the essential branches of modern telecommunications, ranging from digital communications to communication networks and the internet, and to software, protocols and services, uses, and economics. This broad spectrum of topics reflects the rapid convergence, through telecommunications, of the underlying technologies in computing, communications, and content management toward the emergence of the information and knowledge society. As a consequence, the journal provides a medium for exchanging research results and technological achievements accomplished by the European and international scientific community, from both academia and industry.