Dynamic Adaptive Federated Learning on Local Long-Tailed Data

Impact Factor 5.5 · CAS Region 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Juncheng Pu;Xiaodong Fu;Hai Dong;Pengcheng Zhang;Li Liu
IEEE Transactions on Services Computing, vol. 17, no. 6, pp. 3485-3498
DOI: 10.1109/TSC.2024.3478796 · Published 2024-10-11
https://ieeexplore.ieee.org/document/10713999/
Citations: 0

Abstract

Federated learning enables privacy-preserving collaborative training of a global model on distributed private data. In practice, local private data often follows a long-tailed distribution, which degrades model performance and biases the results. In this paper, we propose FedWolf, a dynamic adaptive federated learning optimization algorithm based on the Grey Wolf Optimizer and Markov chains, to address the performance degradation and result bias caused by local long-tailed data. FedWolf starts from a set of randomly initialized parameters rather than the single shared parameter set used by existing methods. Participants are then elected into multiple levels according to F1 scores computed from their uploaded parameters. A dynamic weighting strategy based on participant level adaptively updates parameters without manual control, and this updating process is modelled as a Markov process. After all communication rounds are completed, each participant's future performance, including its probability of being elected to each level, is predicted from its historical Markov states. Finally, each participant's probability of being elected to level 1 serves as its contribution weight, and the global model is obtained by aggregating with these dynamic contribution weights. We introduce the Gini index to evaluate the bias of classification results. Extensive experiments validate the effectiveness of FedWolf in mitigating performance degradation and classification bias, as well as the robustness of the adaptive parameter updating against outliers and malicious users.
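The pipeline described above (Markov modelling of participant-level transitions, predicted level-1 probability as contribution weight, Gini index for bias evaluation) can be illustrated roughly as follows. This is a minimal sketch only: the number of levels, the function names, and the inequality-style Gini definition are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

N_LEVELS = 3  # assumed number of participant levels

def transition_matrix(levels):
    """Maximum-likelihood Markov transition matrix from a 1-based level sequence."""
    T = np.zeros((N_LEVELS, N_LEVELS))
    for a, b in zip(levels, levels[1:]):
        T[a - 1, b - 1] += 1.0
    rows = T.sum(axis=1, keepdims=True)
    # Rows with no observed transitions fall back to a uniform distribution.
    return np.where(rows > 0, T / np.where(rows == 0, 1.0, rows), 1.0 / N_LEVELS)

def prob_level1_next(levels):
    """Predicted probability of being elected level 1 in the next round."""
    T = transition_matrix(levels)
    return T[levels[-1] - 1, 0]

def aggregate(params, histories):
    """Global model as a weighted sum of participant parameters, each weight
    proportional to that participant's predicted level-1 probability."""
    w = np.array([prob_level1_next(h) for h in histories])
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params))

def gini_coefficient(x):
    """Gini coefficient of per-class prediction counts: 0 means perfectly
    balanced predictions; values near 1 mean predictions concentrated on
    a few (head) classes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diff = np.abs(x[:, None] - x[None, :]).sum()
    return diff / (2.0 * n * n * x.mean())
```

For example, a participant with level history [1, 1, 2, 1, 1] yields an estimated transition probability T[0, 0] = 2/3 of staying at level 1, so its predicted level-1 probability for the next round is 2/3.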
Source journal: IEEE Transactions on Services Computing (COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, SOFTWARE ENGINEERING)
CiteScore: 11.50
Self-citation rate: 6.20%
Articles per year: 278
Review time: >12 weeks
Journal description: IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It emphasizes algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services, as well as composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, and collaboration in the realm of Services Computing.