A Chunked Local Aggregation Strategy in Federated Learning

Haibing Zhao, Weiqin Tong, Xiaoli Zhi, Tong Liu
{"title":"联邦学习中的分块局部聚合策略","authors":"Haibing Zhao, Weiqin Tong, Xiaoli Zhi, Tong Liu","doi":"10.1109/ICTAI56018.2022.00014","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a distributed machine learning technology that trains models on large-scale distributed devices while keeping training data localized and privatized. However, in settings where data is distributed in a not independent and identically distributed (non-I.I.D.) fashion, the single joint model produced by FL suffers in terms of test set accuracy and communication costs. And a multi-layer topology are widely deployed for FL in real scenarios. Therefore, we propose FedBox, a chunked local aggregation federated learning framework to improve the generalization ability and aggregation efficiency of model in non-I.I.D. data by adapting to the topology of the real network. Moreover, we study the adaptive gradient descent (AGC) to mitigate the feature shift caused by training non-I.I.D. data. In this work, we modified the aggregation strategy of FL by introducing a virtual node layer based on local stochastic gradient methods (SGD), and separate the edge node cluster by the similarity between the local update model and the global update model. We show that FedBox can effectively improve convergence speed and test accuracy, while reducing communication cost. Training results on FederatedEMNIST, Cifar10, Cifar100 and Shakespeare datasets indicate that FedBox allows model training to converge in fewer communication rounds and improves training accuracy by up to 3.1% compared with FedAVG. In addition, we make an empirical analysis of the extended range of virtual nodes.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Chunked Local Aggregation Strategy in Federated Learning\",\"authors\":\"Haibing Zhao, Weiqin Tong, Xiaoli Zhi, Tong Liu\",\"doi\":\"10.1109/ICTAI56018.2022.00014\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is a distributed machine learning technology that trains models on large-scale distributed devices while keeping training data localized and privatized. However, in settings where data is distributed in a not independent and identically distributed (non-I.I.D.) fashion, the single joint model produced by FL suffers in terms of test set accuracy and communication costs. And a multi-layer topology are widely deployed for FL in real scenarios. Therefore, we propose FedBox, a chunked local aggregation federated learning framework to improve the generalization ability and aggregation efficiency of model in non-I.I.D. data by adapting to the topology of the real network. Moreover, we study the adaptive gradient descent (AGC) to mitigate the feature shift caused by training non-I.I.D. data. In this work, we modified the aggregation strategy of FL by introducing a virtual node layer based on local stochastic gradient methods (SGD), and separate the edge node cluster by the similarity between the local update model and the global update model. We show that FedBox can effectively improve convergence speed and test accuracy, while reducing communication cost. 
Training results on FederatedEMNIST, Cifar10, Cifar100 and Shakespeare datasets indicate that FedBox allows model training to converge in fewer communication rounds and improves training accuracy by up to 3.1% compared with FedAVG. In addition, we make an empirical analysis of the extended range of virtual nodes.\",\"PeriodicalId\":354314,\"journal\":{\"name\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI56018.2022.00014\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is a distributed machine learning technology that trains models on large-scale distributed devices while keeping training data local and private. However, when data is distributed in a non-independent and identically distributed (non-I.I.D.) fashion, the single joint model produced by FL suffers in terms of test-set accuracy and communication cost. Moreover, multi-layer topologies are widely deployed for FL in real-world scenarios. We therefore propose FedBox, a chunked local aggregation federated learning framework that improves the generalization ability and aggregation efficiency of the model on non-I.I.D. data by adapting to the topology of the real network. We also study adaptive gradient descent (AGC) to mitigate the feature shift caused by training on non-I.I.D. data. In this work, we modify the aggregation strategy of FL by introducing a virtual node layer based on local stochastic gradient descent (SGD), and partition edge nodes into clusters according to the similarity between the local update model and the global update model. We show that FedBox effectively improves convergence speed and test accuracy while reducing communication cost. Training results on the FederatedEMNIST, Cifar10, Cifar100, and Shakespeare datasets indicate that FedBox allows model training to converge in fewer communication rounds and improves training accuracy by up to 3.1% compared with FedAVG. In addition, we present an empirical analysis of the extended range of virtual nodes.
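The node-grouping step described in the abstract lends itself to a short illustration. Below is a minimal Python sketch of similarity-based edge-node grouping in the spirit of FedBox; the function names, the cosine-similarity measure, and the single threshold parameter are all illustrative assumptions on our part, since the abstract does not specify the authors' actual criterion or implementation.

# Illustrative sketch only -- not the authors' code. Assumes model updates
# are lists of NumPy arrays and that cosine similarity to the global update
# is the grouping criterion (an assumption; the paper only says "similarity").
import numpy as np

def flatten(update):
    """Concatenate a model update (a list of weight arrays) into one vector."""
    return np.concatenate([w.ravel() for w in update])

def cosine_similarity(a, b):
    """Cosine similarity of two flat vectors, guarded against zero norms."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def group_edge_nodes(local_updates, global_update, threshold=0.5):
    """Assign each edge node to a virtual-node cluster according to how
    closely its local update agrees with the global update direction."""
    g = flatten(global_update)
    clusters = {"aligned": [], "divergent": []}
    for node_id, update in local_updates.items():
        sim = cosine_similarity(flatten(update), g)
        clusters["aligned" if sim >= threshold else "divergent"].append(node_id)
    return clusters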
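The chunked (two-level) aggregation itself can likewise be sketched as a FedAVG-style weighted average applied first inside each virtual-node cluster and then across clusters. Again a hedged sketch: the paper's actual weighting and scheduling are not given in the abstract, so weighting each node by its local sample count is our assumption.

# Illustrative sketch only. Two-level ("chunked") aggregation: each virtual
# node first averages its member models FedAVG-style, then the server
# averages the virtual-node models. Models are lists of NumPy arrays;
# sample-count weighting is an assumption, not taken from the paper.
def weighted_average(models, weights):
    """FedAVG-style weighted average of models given as lists of arrays."""
    total = float(sum(weights))
    return [sum(w * layer for w, layer in zip(weights, layers)) / total
            for layers in zip(*models)]

def chunked_aggregate(clusters, node_models, node_sizes):
    """Aggregate within each virtual-node cluster first, then across clusters."""
    vn_models, vn_sizes = [], []
    for members in clusters.values():
        if not members:
            continue  # skip empty clusters
        vn_models.append(weighted_average([node_models[m] for m in members],
                                          [node_sizes[m] for m in members]))
        vn_sizes.append(sum(node_sizes[m] for m in members))
    return weighted_average(vn_models, vn_sizes)

In a multi-layer topology, the inner average would run at the edge (virtual-node) layer and only the cluster-level models would travel to the central server, which is consistent with the communication-cost reduction the abstract reports.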