Incentive Mechanism Design for Multi-Round Federated Learning With a Single Budget
Authors: Zhihao Ren; Xinglin Zhang; Wing W. Y. Ng; Junna Zhang
Journal: IEEE Transactions on Network Science and Engineering, vol. 12, no. 1, pp. 198-209
DOI: 10.1109/TNSE.2024.3488719
Published: 2024-10-30
URL: https://ieeexplore.ieee.org/document/10739914/
Citations: 0
Abstract
Federated learning (FL) is a popular distributed learning paradigm. In practical applications, FL faces two major challenges: (1) participants inevitably incur computational and communication costs during training, which may discourage their participation; (2) the local data of participants is usually non-IID, which significantly degrades the global model's performance. To address these challenges, in this paper, we model the FL incentive process as a budget-constrained cumulative quality maximization problem (BCQM). Unlike most existing works that focus on a single round of FL, BCQM encompasses the entire multi-round FL process under a single budget. We then propose a comprehensive incentive mechanism named Reverse Auction for Budget-constrained nOn-IID fedeRated learNing (RABORN) to solve BCQM. RABORN covers the entire FL process while ensuring several desirable properties, and we prove its theoretical performance guarantees. Moreover, RABORN exhibits significant advantages over baselines on real-world datasets. Specifically, on MNIST, Fashion-MNIST, and CIFAR-10, RABORN achieves final accuracies that are 2.94%, 5.94%, and 21.75% higher than the baselines, respectively. Correspondingly, when the model accuracies on MNIST, Fashion-MNIST, and CIFAR-10 converge to 80%, 70%, and 40%, RABORN reduces communication rounds by over 33%, 45%, and 74% compared to the baselines, while increasing the remaining budget by over 30%, 19%, and 130%, respectively.
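To make the setup concrete, the following is a minimal, illustrative sketch of a budget-constrained, multi-round reverse auction in the spirit of BCQM. It is not the paper's RABORN mechanism: the Bid structure, the quality scores, the pay-as-bid payments, and the greedy quality-per-cost selection rule are all simplifying assumptions made here for illustration only.

```python
# Illustrative sketch only: a generic greedy, budget-constrained reverse auction
# for multi-round federated learning. This is NOT the paper's RABORN algorithm;
# the bid structure, quality scores, and selection rule are assumptions.
from dataclasses import dataclass


@dataclass
class Bid:
    participant_id: int
    cost: float      # claimed per-round training cost (the reverse-auction bid)
    quality: float   # estimated contribution quality of the participant's update


def run_auction(bids_per_round: list[list[Bid]], budget: float) -> list[list[int]]:
    """Greedily select winners in each round by quality-per-cost ratio until the
    single budget shared across all rounds is exhausted."""
    winners_per_round = []
    remaining = budget
    for bids in bids_per_round:
        winners = []
        # Prefer participants offering the most quality per unit of payment.
        for bid in sorted(bids, key=lambda b: b.quality / b.cost, reverse=True):
            if bid.cost <= remaining:
                winners.append(bid.participant_id)
                remaining -= bid.cost  # pay-as-bid for simplicity (not truthful)
        winners_per_round.append(winners)
    return winners_per_round


if __name__ == "__main__":
    rounds = [
        [Bid(0, cost=1.0, quality=0.9), Bid(1, cost=2.0, quality=0.5)],
        [Bid(0, cost=1.0, quality=0.8), Bid(1, cost=0.5, quality=0.4)],
    ]
    print(run_auction(rounds, budget=3.0))
```

A full mechanism such as RABORN would additionally have to guarantee the desirable auction properties the paper targets; a naive pay-as-bid greedy rule like the one above does not provide them.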
About the Journal:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles that deal with the theory and applications of network science and the interconnections among the elements in a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.