FedAL: Black-Box Federated Knowledge Distillation Enabled by Adversarial Learning

Pengchao Han;Xingyan Shi;Jianwei Huang
{"title":"FedAL:通过对抗性学习实现黑盒子联邦知识蒸馏","authors":"Pengchao Han;Xingyan Shi;Jianwei Huang","doi":"10.1109/JSAC.2024.3431516","DOIUrl":null,"url":null,"abstract":"Knowledge distillation (KD) can enable collaborative learning among distributed clients that have different model architectures and do not share their local data and model parameters with others. Each client updates its local model using the average model output/feature of all client models as the target, known as federated KD. However, existing federated KD methods often do not perform well when clients’ local models are trained with heterogeneous local datasets. In this paper, we propose Federated knowledge distillation enabled by Adversarial Learning (\n<monospace>FedAL</monospace>\n) to address the data heterogeneity among clients. First, to alleviate the local model output divergence across clients caused by data heterogeneity, the server acts as a discriminator to guide clients’ local model training to achieve consensus model outputs among clients through a min-max game between clients and the discriminator. Moreover, catastrophic forgetting may happen during the clients’ local training and global knowledge transfer due to clients’ heterogeneous local data. Towards this challenge, we design the less-forgetting regularization for both local training and global knowledge transfer to guarantee clients’ ability to transfer/learn knowledge to/from others. Experimental results show that \n<monospace>FedAL</monospace>\n and its variants achieve higher accuracy than other federated KD baselines.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":"42 11","pages":"3064-3077"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedAL: Black-Box Federated Knowledge Distillation Enabled by Adversarial Learning\",\"authors\":\"Pengchao Han;Xingyan Shi;Jianwei Huang\",\"doi\":\"10.1109/JSAC.2024.3431516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Knowledge distillation (KD) can enable collaborative learning among distributed clients that have different model architectures and do not share their local data and model parameters with others. Each client updates its local model using the average model output/feature of all client models as the target, known as federated KD. However, existing federated KD methods often do not perform well when clients’ local models are trained with heterogeneous local datasets. In this paper, we propose Federated knowledge distillation enabled by Adversarial Learning (\\n<monospace>FedAL</monospace>\\n) to address the data heterogeneity among clients. First, to alleviate the local model output divergence across clients caused by data heterogeneity, the server acts as a discriminator to guide clients’ local model training to achieve consensus model outputs among clients through a min-max game between clients and the discriminator. Moreover, catastrophic forgetting may happen during the clients’ local training and global knowledge transfer due to clients’ heterogeneous local data. Towards this challenge, we design the less-forgetting regularization for both local training and global knowledge transfer to guarantee clients’ ability to transfer/learn knowledge to/from others. 
Experimental results show that \\n<monospace>FedAL</monospace>\\n and its variants achieve higher accuracy than other federated KD baselines.\",\"PeriodicalId\":73294,\"journal\":{\"name\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"volume\":\"42 11\",\"pages\":\"3064-3077\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10606337/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10606337/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Knowledge distillation (KD) can enable collaborative learning among distributed clients that have different model architectures and do not share their local data and model parameters with others. Each client updates its local model using the average model output/feature of all client models as the target, known as federated KD. However, existing federated KD methods often do not perform well when clients’ local models are trained with heterogeneous local datasets. In this paper, we propose Federated knowledge distillation enabled by Adversarial Learning ( FedAL ) to address the data heterogeneity among clients. First, to alleviate the local model output divergence across clients caused by data heterogeneity, the server acts as a discriminator to guide clients’ local model training to achieve consensus model outputs among clients through a min-max game between clients and the discriminator. Moreover, catastrophic forgetting may happen during the clients’ local training and global knowledge transfer due to clients’ heterogeneous local data. Towards this challenge, we design the less-forgetting regularization for both local training and global knowledge transfer to guarantee clients’ ability to transfer/learn knowledge to/from others. Experimental results show that FedAL and its variants achieve higher accuracy than other federated KD baselines.
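To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of its three ingredients: each client distills toward the average of all clients' model outputs on a shared proxy batch, a server-side discriminator tries to identify which client produced a given output while clients try to fool it (the min-max game), and a less-forgetting penalty anchors each client to its pre-distillation outputs. The model sizes, loss weights, synthetic proxy data, and the exact forms of the discriminator and regularizer are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the FedAL ideas on synthetic data (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLIENTS, NUM_CLASSES, DIM = 3, 5, 16

# Heterogeneous client models: different widths stand in for different architectures.
clients = [nn.Sequential(nn.Linear(DIM, 8 * (k + 1)), nn.ReLU(),
                         nn.Linear(8 * (k + 1), NUM_CLASSES))
           for k in range(NUM_CLIENTS)]
optimizers = [torch.optim.SGD(m.parameters(), lr=0.05) for m in clients]

# Server-side discriminator: guesses which client produced a given output vector.
discriminator = nn.Sequential(nn.Linear(NUM_CLASSES, 32), nn.ReLU(),
                              nn.Linear(32, NUM_CLIENTS))
disc_opt = torch.optim.SGD(discriminator.parameters(), lr=0.05)

# Shared proxy batch on which clients exchange only model outputs (black-box KD).
proxy_x = torch.randn(64, DIM)

# Snapshot of each client's pre-distillation outputs, used by the
# less-forgetting regularizer to keep local knowledge from being overwritten.
with torch.no_grad():
    anchors = [m(proxy_x) for m in clients]

KD_W, ADV_W, LF_W = 1.0, 0.1, 0.1   # illustrative loss weights

for step in range(50):
    # 1) Each client reports its outputs on the proxy batch; the server averages them.
    all_logits = [m(proxy_x) for m in clients]
    avg_logits = torch.stack([l.detach() for l in all_logits]).mean(dim=0)

    # 2) Discriminator step: learn to identify which client produced each output.
    disc_opt.zero_grad()
    d_loss = sum(F.cross_entropy(discriminator(l.detach()),
                                 torch.full((proxy_x.size(0),), k, dtype=torch.long))
                 for k, l in enumerate(all_logits))
    d_loss.backward()
    disc_opt.step()

    # 3) Client step: distill toward the consensus target, fool the discriminator
    #    (pushing client outputs toward indistinguishability), and stay close to
    #    the pre-distillation anchor (less-forgetting regularization).
    for k, (model, opt) in enumerate(zip(clients, optimizers)):
        opt.zero_grad()
        logits = model(proxy_x)
        kd = F.kl_div(F.log_softmax(logits, dim=1),
                      F.softmax(avg_logits, dim=1), reduction="batchmean")
        adv = -F.cross_entropy(discriminator(logits),
                               torch.full((proxy_x.size(0),), k, dtype=torch.long))
        lf = F.mse_loss(logits, anchors[k])
        (KD_W * kd + ADV_W * adv + LF_W * lf).backward()
        opt.step()
```

In this sketch the clients maximize the discriminator's identification error rather than using a uniform-label target; either formulation yields the min-max structure described in the abstract, and only model outputs (never parameters or local data) cross the client-server boundary.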