LCEFL: A Lightweight Contribution Evaluation Approach for Federated Learning

IF 7.7 | CAS Zone 2, Computer Science | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Jingjing Guo;Jiaxing Li;Zhiquan Liu;Yupeng Xiong;Yong Ma;Athanasios V. Vasilakos;Xinghua Li;Jianfeng Ma
{"title":"LCEFL: A Lightweight Contribution Evaluation Approach for Federated Learning","authors":"Jingjing Guo;Jiaxing Li;Zhiquan Liu;Yupeng Xiong;Yong Ma;Athanasios V. Vasilakos;Xinghua Li;Jianfeng Ma","doi":"10.1109/TMC.2025.3545140","DOIUrl":null,"url":null,"abstract":"The prerequisite for implementing incentive mechanisms and reliable participant selection schemes in federated learning is to obtain the contribution of each participant. Available evaluation methods for participant contributions require the server to possess a test dataset, often impractical. Additionally, the excessively high complexity of these works is unacceptable when training complex models in large-scale federated learning system. To address these issues, we propose a lightweight contribution evaluation method for federated learning participants, named LCEFL, based on model projection theory, which does not require the server to provide a test dataset. In addition, a model compression method is designed to be used in LCEFL to reduce the computational complexity. Furthermore, a trusted aggregation method based on LCEFL is proposed, where the weight of each participant's local model is determined by its trust level, which can be calculated using its contribution evaluation result. Experimental results show that LCEFL can achieve nearly the same accuracy as schemes based on Shapley Value, while significantly reducing computational overhead by more than 50%. Compared to available aggregation methods, the proposed trusted aggregation scheme is able to accelerate the convergence speed of the global model and improve its accuracy by 2% to 45%.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 7","pages":"6643-6657"},"PeriodicalIF":7.7000,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10902114/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

A prerequisite for implementing incentive mechanisms and reliable participant selection schemes in federated learning is obtaining the contribution of each participant. Existing evaluation methods for participant contributions require the server to possess a test dataset, which is often impractical. Additionally, the excessively high complexity of these methods is unacceptable when training complex models in large-scale federated learning systems. To address these issues, we propose LCEFL, a lightweight contribution evaluation method for federated learning participants based on model projection theory that does not require the server to provide a test dataset. In addition, a model compression method is designed for use in LCEFL to reduce the computational complexity. Furthermore, a trusted aggregation method based on LCEFL is proposed, in which the weight of each participant's local model is determined by its trust level, calculated from its contribution evaluation result. Experimental results show that LCEFL achieves nearly the same accuracy as Shapley-value-based schemes while reducing computational overhead by more than 50%. Compared with existing aggregation methods, the proposed trusted aggregation scheme accelerates the convergence of the global model and improves its accuracy by 2% to 45%.
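The abstract gives only a high-level outline of LCEFL; the exact projection-based formulation, the model compression step, and the trust computation are described in the paper itself. As a rough illustration of the two ideas the abstract names, the sketch below scores each client's contribution by projecting its flattened model update onto the mean update direction and then weights aggregation by trust levels derived from those scores. Everything in it is an assumption for illustration: the function names (contribution_scores, trust_weights, trusted_aggregate), the choice of the mean update as the projection reference, and the score-to-trust mapping are hypothetical and are not the authors' LCEFL algorithm.

```python
# Minimal sketch (not the authors' implementation) of projection-based
# contribution scoring and trust-weighted aggregation on flattened parameters.
import numpy as np


def contribution_scores(local_updates: list[np.ndarray]) -> np.ndarray:
    """Approximate each client's contribution as the projection of its
    flattened update onto the mean update direction (hypothetical choice)."""
    mean_update = np.mean(local_updates, axis=0)
    direction = mean_update / (np.linalg.norm(mean_update) + 1e-12)
    return np.array([float(u @ direction) for u in local_updates])


def trust_weights(scores: np.ndarray) -> np.ndarray:
    """Map contribution scores to non-negative trust weights summing to 1;
    clients with non-positive scores receive zero weight."""
    clipped = np.clip(scores, 0.0, None)
    total = clipped.sum()
    if total == 0:
        return np.full(len(scores), 1.0 / len(scores))
    return clipped / total


def trusted_aggregate(global_model: np.ndarray,
                      local_updates: list[np.ndarray]) -> np.ndarray:
    """One round of trust-weighted aggregation of client updates."""
    weights = trust_weights(contribution_scores(local_updates))
    weighted_update = sum(w * u for w, u in zip(weights, local_updates))
    return global_model + weighted_update


if __name__ == "__main__":
    # Toy example: two well-behaved clients and one noisy outlier.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=100)
    updates = [shared + 0.1 * rng.normal(size=100),
               shared + 0.1 * rng.normal(size=100),
               5.0 * rng.normal(size=100)]  # low-quality client
    print(trust_weights(contribution_scores(updates)))
```

In this toy run, the outlier client's update is nearly orthogonal to the shared direction, so it receives a small projection score and a correspondingly low aggregation weight, mirroring the abstract's claim that a participant's influence on the global model should follow its evaluated contribution.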
Source Journal

IEEE Transactions on Mobile Computing (Engineering & Technology - Telecommunications)
CiteScore: 12.90
Self-citation rate: 2.50%
Articles per year: 403
Review time: 6.6 months
Journal description: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.