José Á. Morell, Z. Dahi, F. Chicano, Gabriel Luque, E. Alba
{"title":"Optimising Communication Overhead in Federated Learning Using NSGA-II","authors":"José Á. Morell, Z. Dahi, F. Chicano, Gabriel Luque, E. Alba","doi":"10.48550/arXiv.2204.02183","DOIUrl":null,"url":null,"abstract":"Federated learning is a training paradigm according to which a server-based model is cooperatively trained using local models running on edge devices and ensuring data privacy. These devices exchange information that induces a substantial communication load, which jeopardises the functioning efficiency. The difficulty of reducing this overhead stands in achieving this without decreasing the model's efficiency (contradictory relation). To do so, many works investigated the compression of the pre/mid/post-trained models and the communication rounds, separately, although they jointly contribute to the communication overload. Our work aims at optimising communication overhead in federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimization algorithm (NSGA-II) to solve it. To the best of the author's knowledge, this is the first work that \\texttt{(I)} explores the add-in that evolutionary computation could bring for solving such a problem, and \\texttt{(II)} considers both the neuron and devices features together. We perform the experimentation by simulating a server/client architecture with 4 slaves. We investigate both convolutional and fully-connected neural networks with 12 and 3 layers, 887,530 and 33,400 weights, respectively. We conducted the validation on the \\texttt{MNIST} dataset containing 70,000 images. The experiments have shown that our proposal could reduce communication by 99% and maintain an accuracy equal to the one obtained by the FedAvg Algorithm that uses 100% of communications.","PeriodicalId":91839,"journal":{"name":"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)","volume":"55 1","pages":"317-333"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applications of Evolutionary Computation : 17th European Conference, EvoApplications 2014, Granada, Spain, April 23-25, 2014 : revised selected papers. EvoApplications (Conference) (17th : 2014 : Granada, Spain)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.02183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Federated learning is a training paradigm in which a server-based model is cooperatively trained from local models running on edge devices, while preserving data privacy. These devices exchange information that induces a substantial communication load, which jeopardises operational efficiency. The difficulty of reducing this overhead lies in doing so without degrading the model's performance (two conflicting objectives). Many works have investigated compressing pre-, mid-, and post-training models and reducing communication rounds, but separately, although both jointly contribute to the communication overload. Our work aims at optimising communication overhead in federated learning by (I) modelling it as a multi-objective problem and (II) applying a multi-objective optimisation algorithm (NSGA-II) to solve it. To the best of the authors' knowledge, this is the first work that (I) explores the added value that evolutionary computation could bring to solving such a problem, and (II) considers both neuron and device features together. We perform the experiments by simulating a server/client architecture with four slave nodes. We investigate both convolutional and fully-connected neural networks with 12 and 3 layers, and 887,530 and 33,400 weights, respectively. We conduct the validation on the MNIST dataset, containing 70,000 images. The experiments show that our proposal can reduce communication by 99% while maintaining an accuracy equal to that obtained by the FedAvg algorithm, which uses 100% of communications.
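The bi-objective formulation can be illustrated with a minimal, self-contained sketch using the pymoo implementation of NSGA-II. This is not the authors' code: the binary mask encoding over neurons, the surrogate error function, and all parameter values below are illustrative assumptions standing in for the real federated-training evaluation.

```python
# Hypothetical sketch: NSGA-II (pymoo) over a bi-objective problem where
# objective 1 is the fraction of weights communicated and objective 2 is a
# toy surrogate for model error. Assumptions only; not the paper's code.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


def surrogate_error(keep_fraction: float) -> float:
    """Toy stand-in for model error: shrinks as more weights are communicated."""
    return 0.02 + 0.5 * np.exp(-5.0 * keep_fraction)


class CommOverheadProblem(ElementwiseProblem):
    """One real-valued gene per neuron, thresholded into a send/skip mask."""

    def __init__(self, n_neurons: int = 128):
        super().__init__(n_var=n_neurons, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        mask = x > 0.5                   # which neurons' weights get transmitted
        keep_fraction = mask.mean()      # proxy for communication cost
        out["F"] = [keep_fraction, surrogate_error(keep_fraction)]


if __name__ == "__main__":
    result = minimize(
        CommOverheadProblem(),
        NSGA2(pop_size=40),
        ("n_gen", 50),
        seed=1,
        verbose=False,
    )
    # result.F holds the Pareto front: communication vs. error trade-offs.
    print(result.F[:5])
```

In the paper itself, the second objective is the accuracy obtained after federated training rounds under FedAvg-style aggregation, which is far more expensive to evaluate than the surrogate used in this sketch.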