Personalized Federated Learning on long-tailed data via knowledge distillation and generated features

IF 3.9 · CAS Tier 3 (Computer Science) · JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Fengling Lv, Pinxin Qian, Yang Lu, Hanzi Wang
DOI: 10.1016/j.patrec.2024.09.024
Journal: Pattern Recognition Letters, Volume 186, Pages 178-183
Published: 2024-10-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167865524002800
Citations: 0

Abstract

Personalized Federated Learning (PFL) offers a novel paradigm for distributed learning, which aims to learn a personalized model for each client through collaborative training of all distributed clients in a privacy-preserving manner. However, the performance of personalized models is often compromised by data heterogeneity and the challenges of long-tailed distributions, both of which are common in real-world applications. In this paper, we explore the joint problem of data heterogeneity and long-tailed distribution in PFL and propose a corresponding solution called Personalized Federated Learning with Distillation and generated Features (PFLDF). Specifically, we employ a lightweight generator trained on the server to generate a balanced feature set for each client that can supplement local minority class information with global class information. This augmentation mechanism is a robust countermeasure against the adverse effects of data imbalance. Subsequently, we use knowledge distillation to transfer the knowledge of the global model to personalized models to improve their generalization performance. Extensive experimental results show the superiority of PFLDF compared to other state-of-the-art PFL methods with long-tailed data distribution.
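The two mechanisms in the abstract — a server-side generator that supplies each client with a class-balanced batch of synthetic features, and knowledge distillation from the global model into each personalized model — can be illustrated in miniature. The sketch below is not the authors' implementation; the function names, the temperature-scaled KL distillation loss, and the `generator` interface are illustrative assumptions based on standard formulations of these techniques.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs, scaled by
    # T^2 as in standard knowledge distillation; here the teacher is the
    # global model and the student is a client's personalized model.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)

def balanced_feature_batch(generator, num_classes, per_class):
    # Request an equal number of synthetic features per class from the
    # (hypothetical) server-trained generator, so that locally rare
    # classes are represented in the client's training batch.
    feats, labels = [], []
    for c in range(num_classes):
        feats.append(generator(c, per_class))  # shape: (per_class, feat_dim)
        labels.extend([c] * per_class)
    return np.vstack(feats), np.array(labels)
```

A client would combine these by training its personalized classifier on the union of local features and the balanced synthetic batch, while adding `distillation_loss` (weighted by a hyperparameter) to the usual cross-entropy objective.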
Source journal

Pattern Recognition Letters (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles per year: 287
Review time: 9.1 months
Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.