Domain-wise knowledge decoupling for personalized federated learning via Radon transform

IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zihao Lu, Junli Wang, Changjun Jiang
Journal: Neurocomputing, Volume 635, Article 130013
DOI: 10.1016/j.neucom.2025.130013
Published: 2025-03-17 (Journal Article)
Citations: 0

Abstract

Personalized federated learning (pFL) customizes local models to address heterogeneous data across clients. One prominent research direction in pFL is model decoupling, where the knowledge of a global model is selectively utilized to assist local model personalization. Prior studies primarily use decoupled global-model parameters to convey this selected knowledge. However, due to the task-related knowledge-mixing nature of deep learning models, using these parameters may introduce irrelevant knowledge to specific clients, impeding personalization. To address this, we propose a domain-wise knowledge decoupling approach (pFedDKD), which decouples global-model knowledge into diverse projection segments in the representation space, meeting the specific needs of clients on heterogeneous local domains. A Radon transform-based method is provided to facilitate this decoupling, enabling clients to extract relevant knowledge segments for personalization. In addition, we provide a distillation-based back-projection learning method to fuse local-model knowledge into the global model, ensuring the updated global-model knowledge remains decouplable by projection. A theoretical analysis confirms that our approach improves generalization. Extensive experiments on four datasets demonstrate that pFedDKD consistently outperforms eleven state-of-the-art baselines, achieving an average improvement of 1.21% in test accuracy over the best-performing baseline.
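The abstract builds on projecting representation-space knowledge into angle-indexed segments via the Radon transform. As a loose, self-contained illustration of that basic operation only (this is not the paper's pFedDKD pipeline; the function name, the nearest-neighbor binning scheme, and the toy 4×4 feature map are all invented for the example), a minimal discrete Radon projection in NumPy might look like:

```python
import numpy as np

def radon_projection(feat, theta_deg):
    """Discrete Radon projection of a square 2-D feature map.

    Each pixel's value is accumulated into the detector bin nearest to its
    signed coordinate along the projection axis at angle `theta_deg`.
    """
    n = feat.shape[0]
    c = (n - 1) / 2.0                      # rotation center of the map
    ys, xs = np.mgrid[0:n, 0:n]
    th = np.deg2rad(theta_deg)
    # signed coordinate of each pixel along the detector axis
    t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th)
    bins = np.clip(np.round(t + c).astype(int), 0, n - 1)
    proj = np.zeros(n)
    np.add.at(proj, bins.ravel(), feat.ravel())  # scatter-add pixel mass
    return proj

feat = np.arange(16.0).reshape(4, 4)       # toy "representation" map
print(radon_projection(feat, 0.0))         # theta=0 reduces to column sums: [24. 28. 32. 36.]
print(radon_projection(feat, 90.0))        # theta=90 reduces to row sums:   [ 6. 22. 38. 54.]
```

Because every pixel is binned somewhere, each projection preserves the total mass of the map regardless of angle; intuitively, this is the kind of property that makes a projection-based decomposition invertible enough for a back-projection-style fusion step such as the one the abstract describes.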
Source journal: Neurocomputing (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual article output: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.