PGFed: Personalize Each Client's Global Objective for Federated Learning

Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu
{"title":"PGFed:个性化每个客户的全球目标,实现联合学习。","authors":"Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu","doi":"10.1109/iccv51070.2023.00365","DOIUrl":null,"url":null,"abstract":"<p><p>Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only <b>implicitly</b> transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose <b>P</b>ersonalized <b>G</b>lobal <b>Fed</b>erated Learning (PGFed), a novel personalized FL framework that enables each client to <b>personalize</b> its own <b>global</b> objective by <b>explicitly</b> and adaptively aggregating the empirical risks of itself and other clients. To avoid massive <math><mrow><mrow><mo>(</mo><mrow><mi>O</mi><mrow><mo>(</mo><mrow><msup><mi>N</mi><mn>2</mn></msup></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></math> communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. 
IEEE International Conference on Computer Vision","volume":"2023 ","pages":"3923-3933"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11024864/pdf/","citationCount":"0","resultStr":"{\"title\":\"PGFed: Personalize Each Client's Global Objective for Federated Learning.\",\"authors\":\"Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu\",\"doi\":\"10.1109/iccv51070.2023.00365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only <b>implicitly</b> transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose <b>P</b>ersonalized <b>G</b>lobal <b>Fed</b>erated Learning (PGFed), a novel personalized FL framework that enables each client to <b>personalize</b> its own <b>global</b> objective by <b>explicitly</b> and adaptively aggregating the empirical risks of itself and other clients. To avoid massive <math><mrow><mrow><mo>(</mo><mrow><mi>O</mi><mrow><mo>(</mo><mrow><msup><mi>N</mi><mn>2</mn></msup></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></math> communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. 
On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.</p>\",\"PeriodicalId\":74564,\"journal\":{\"name\":\"Proceedings. IEEE International Conference on Computer Vision\",\"volume\":\"2023 \",\"pages\":\"3923-3933\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11024864/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. IEEE International Conference on Computer Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/iccv51070.2023.00365\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/15 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE International Conference on Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccv51070.2023.00365","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/15 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only implicitly transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose Personalized Global Federated Learning (PGFed), a novel personalized FL framework that enables each client to personalize its own global objective by explicitly and adaptively aggregating the empirical risks of itself and other clients. To avoid massive O(N²) communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.
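The first-order approximation the abstract mentions can be illustrated with a small numerical sketch. Everything below is a toy construction, not the paper's implementation: the quadratic `risk` functions, the weights `alphas`, and the specific client states are illustrative assumptions. The point it demonstrates is that client i can estimate the weighted sum of other clients' risks f_j(θ) using only each client j's own risk value f_j(θ_j) and gradient ∇f_j(θ_j), i.e. f_j(θ) ≈ f_j(θ_j) + ∇f_j(θ_j)·(θ − θ_j), instead of evaluating f_j directly (which would require pairwise exchange and O(N²) communication).

```python
import numpy as np

# Toy empirical risk for a client: a quadratic centered at c.
# These risks are illustrative stand-ins for real client losses.
def risk(theta, c):
    return 0.5 * np.sum((theta - c) ** 2)

def risk_grad(theta, c):
    return theta - c

# Client 0 wants a personalized global objective of the form
#   L_0(theta) = f_0(theta) + sum_j alpha_j * f_j(theta),
# but cannot evaluate the other clients' risks f_j at its own model.
# First-order idea: expand f_j around client j's own model theta_j:
#   f_j(theta) ~= f_j(theta_j) + grad f_j(theta_j) . (theta - theta_j)

centers = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
theta_js = [c + 0.1 for c in centers]   # each client's current local model
alphas = np.array([0.5, 0.3])           # client 0's weights on clients 1 and 2
theta = np.array([1.0, 1.0])            # client 0's model being evaluated

# Exact objective (would need direct access to f_1, f_2 -- not available in FL).
exact = risk(theta, centers[0]) + sum(
    a * risk(theta, centers[j + 1]) for j, a in enumerate(alphas)
)

# First-order estimate: uses only scalars and gradients each client
# can report once per round, so communication stays O(N).
approx = risk(theta, centers[0]) + sum(
    a * (risk(theta_js[j + 1], centers[j + 1])
         + risk_grad(theta_js[j + 1], centers[j + 1]) @ (theta - theta_js[j + 1]))
    for j, a in enumerate(alphas)
)

print(f"exact = {exact:.3f}, first-order estimate = {approx:.3f}")
```

For these quadratic toy risks the approximation error is exactly the weighted sum of ½‖θ − θ_j‖² terms, so the estimate tightens as client models move closer together over training.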
