A Novel Incentive Mechanism for Federated Learning Over Wireless Communications

Yong Wang;Yu Zhou;Pei-Qiu Huang
{"title":"A Novel Incentive Mechanism for Federated Learning Over Wireless Communications","authors":"Yong Wang;Yu Zhou;Pei-Qiu Huang","doi":"10.1109/TAI.2024.3419757","DOIUrl":null,"url":null,"abstract":"This article studies a federated learning system over wireless communications, where a parameter server shares a global model trained by distributed devices. Due to limited communication resources, not all devices can participate in the training process. To encourage suitable devices to participate, this article proposes a novel incentive mechanism, where the parameter server assigns rewards to the devices, and the devices make participation decisions to maximize their overall profit based on the obtained rewards and their energy costs. Based on the interaction between the parameter server and the devices, the proposed incentive mechanism is formulated as a bilevel optimization problem (BOP), in which the upper level optimizes reward factors for the parameter server and the lower level makes participation decisions for the devices. Note that each device needs to make an independent participation decision due to limited communication resources and privacy concerns. To solve this BOP, a bilevel optimization approach called BIMFL is proposed. BIMFL adopts multiagent reinforcement learning (MARL) to make independent participation decisions with local information at the lower level, and introduces multiagent meta-reinforcement learning to accelerate the training by incorporating meta-learning into MARL. Moreover, BIMFL utilizes covariance matrix adaptation evolutionary strategy to optimize reward factors at the upper level. 
The effectiveness of BIMFL is demonstrated on different datasets using multilayer perceptron and convolutional neural networks.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 11","pages":"5561-5574"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10574861/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This article studies a federated learning system over wireless communications, where a parameter server shares a global model trained by distributed devices. Due to limited communication resources, not all devices can participate in the training process. To encourage suitable devices to participate, this article proposes a novel incentive mechanism, where the parameter server assigns rewards to the devices, and the devices make participation decisions to maximize their overall profit based on the obtained rewards and their energy costs. Based on the interaction between the parameter server and the devices, the proposed incentive mechanism is formulated as a bilevel optimization problem (BOP), in which the upper level optimizes reward factors for the parameter server and the lower level makes participation decisions for the devices. Note that each device needs to make an independent participation decision due to limited communication resources and privacy concerns. To solve this BOP, a bilevel optimization approach called BIMFL is proposed. BIMFL adopts multiagent reinforcement learning (MARL) to make independent participation decisions with local information at the lower level, and introduces multiagent meta-reinforcement learning to accelerate the training by incorporating meta-learning into MARL. Moreover, BIMFL utilizes the covariance matrix adaptation evolution strategy (CMA-ES) to optimize the reward factors at the upper level. The effectiveness of BIMFL is demonstrated on different datasets using multilayer perceptrons and convolutional neural networks.
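The bilevel structure described in the abstract can be sketched in a deliberately simplified form. In the toy sketch below, the lower level is reduced to a threshold rule (each device joins only when the offered reward exceeds its energy cost) and the upper level uses a simple (1+1) evolution strategy over a scalar reward factor as a stand-in for CMA-ES. The energy costs, the server utility function, and both simplifications are assumptions for illustration only; the paper's actual method uses multiagent (meta-)reinforcement learning at the lower level.

```python
import random

# Hypothetical toy illustration of the bilevel incentive mechanism.
# All numbers (energy costs, server utility) are made-up assumptions;
# the paper's lower level uses MARL with local information, and its
# upper level uses CMA-ES, not this scalar (1+1) search.

ENERGY_COSTS = [0.2, 0.5, 0.8, 1.1, 1.4]  # assumed per-device energy costs

def lower_level(reward):
    """Each device independently joins iff its profit (reward - cost) > 0."""
    return [reward > c for c in ENERGY_COSTS]

def server_utility(reward):
    """Assumed upper-level objective: diminishing value per additional
    participant, minus the total reward paid out."""
    k = sum(lower_level(reward))
    value = sum(1.0 / (i + 1) for i in range(k))  # 1 + 1/2 + 1/3 + ...
    return value - reward * k

# Upper level: a (1+1) evolution strategy over the scalar reward factor
# (CMA-ES would additionally adapt a full covariance matrix).
random.seed(0)
best_r = 1.0
best_u = server_utility(best_r)
for _ in range(200):
    cand = max(0.0, best_r + random.gauss(0.0, 0.2))  # Gaussian mutation
    if server_utility(cand) > best_u:
        best_r, best_u = cand, server_utility(cand)

print(f"reward factor ~ {best_r:.3f}, server utility ~ {best_u:.3f}")
```

Under these assumed costs, the search tends to settle just above the cheapest device's energy cost: every extra participant is paid the full reward but contributes diminishing value, so recruiting only the cheapest device maximizes this particular toy objective.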