Layer-wise Knowledge Distillation for Cross-Device Federated Learning

Huy Q. Le, Loc X. Nguyen, Seong-Bae Park, C. Hong
{"title":"面向跨设备联邦学习的分层知识蒸馏","authors":"Huy Q. Le, Loc X. Nguyen, Seong-Bae Park, C. Hong","doi":"10.1109/ICOIN56518.2023.10049011","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) has been proposed as a decentralized machine learning system where multiple clients jointly train the model without sharing private data. In FL, the statistical heterogeneity among devices has become a crucial challenge, which can cause degradation in generalization performance. Previous FL approaches have proven that leveraging the proximal regularization at the local training process can alleviate the divergence of parameter aggregation from biased local models. In this work, to address the heterogeneity issues in conventional FL, we propose a layer-wise knowledge distillation method in federated learning, namely, FedLKD, which regularizes the local training step via the knowledge distillation scheme between global and local models utilizing the small proxy dataset. Hence, FedLKD deploys the layer-wise knowledge distillation of the multiple devices and the global server as the clients’ regularized loss function. A layer-wise knowledge distillation mechanism is introduced to update the local model to exploit the common representation from different layers. Through extensive experiments, we demonstrate that FedLKD outperforms the vanilla FedAvg and FedProx on three federated datasets.","PeriodicalId":285763,"journal":{"name":"2023 International Conference on Information Networking (ICOIN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Layer-wise Knowledge Distillation for Cross-Device Federated Learning\",\"authors\":\"Huy Q. Le, Loc X. Nguyen, Seong-Bae Park, C. Hong\",\"doi\":\"10.1109/ICOIN56518.2023.10049011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) has been proposed as a decentralized machine learning system where multiple clients jointly train the model without sharing private data. In FL, the statistical heterogeneity among devices has become a crucial challenge, which can cause degradation in generalization performance. Previous FL approaches have proven that leveraging the proximal regularization at the local training process can alleviate the divergence of parameter aggregation from biased local models. In this work, to address the heterogeneity issues in conventional FL, we propose a layer-wise knowledge distillation method in federated learning, namely, FedLKD, which regularizes the local training step via the knowledge distillation scheme between global and local models utilizing the small proxy dataset. Hence, FedLKD deploys the layer-wise knowledge distillation of the multiple devices and the global server as the clients’ regularized loss function. A layer-wise knowledge distillation mechanism is introduced to update the local model to exploit the common representation from different layers. 
Through extensive experiments, we demonstrate that FedLKD outperforms the vanilla FedAvg and FedProx on three federated datasets.\",\"PeriodicalId\":285763,\"journal\":{\"name\":\"2023 International Conference on Information Networking (ICOIN)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 International Conference on Information Networking (ICOIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICOIN56518.2023.10049011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Information Networking (ICOIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOIN56518.2023.10049011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated Learning (FL) has been proposed as a decentralized machine learning paradigm in which multiple clients jointly train a model without sharing private data. In FL, statistical heterogeneity among devices has become a crucial challenge that can degrade generalization performance. Previous FL approaches have shown that applying proximal regularization during local training can alleviate the divergence of the aggregated parameters caused by biased local models. In this work, to address the heterogeneity issues of conventional FL, we propose a layer-wise knowledge distillation method for federated learning, namely FedLKD, which regularizes the local training step via a knowledge distillation scheme between the global and local models using a small proxy dataset. FedLKD thus deploys layer-wise knowledge distillation between the multiple devices and the global server as the clients' regularized loss function. The layer-wise knowledge distillation mechanism updates the local model to exploit the common representations from different layers. Through extensive experiments, we demonstrate that FedLKD outperforms vanilla FedAvg and FedProx on three federated datasets.
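The sketch below illustrates, in PyTorch, how a client-side local update with a layer-wise distillation penalty on a proxy dataset could be structured: the frozen global model acts as the teacher, intermediate features from matching layers of the global and local models are compared on proxy batches, and the resulting penalty is added to the usual supervised loss on private data. This is a minimal illustration under stated assumptions, not the authors' reference implementation; names such as `local_update`, `proxy_loader`, `kd_weight`, and the layer list are hypothetical, and FedLKD's exact distillation loss and layer selection may differ.

```python
# Minimal sketch of layer-wise KD regularization during a client's local update.
# Assumptions: feature matching via MSE on named intermediate layers; the
# paper's actual loss and layer choice may differ.
import torch
import torch.nn.functional as F

def local_update(local_model, global_model, local_loader, proxy_loader,
                 layers=("layer1", "layer2", "layer3"), kd_weight=0.1,
                 lr=0.01, device="cpu"):
    """One round of local training with a layer-wise distillation penalty."""
    global_model.eval()                       # frozen teacher (server model)
    local_model.train()
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)

    def collect_features(model, x):
        # Capture intermediate outputs of the named layers via forward hooks.
        feats = {}
        modules = dict(model.named_modules())
        hooks = [modules[name].register_forward_hook(
                     lambda m, i, o, n=name: feats.__setitem__(n, o))
                 for name in layers]
        model(x)
        for h in hooks:
            h.remove()
        return feats

    proxy_iter = iter(proxy_loader)
    for x, y in local_loader:
        x, y = x.to(device), y.to(device)
        ce = F.cross_entropy(local_model(x), y)   # supervised loss on private data

        # Draw a proxy batch (shared, small dataset) for distillation.
        try:
            xp, _ = next(proxy_iter)
        except StopIteration:
            proxy_iter = iter(proxy_loader)
            xp, _ = next(proxy_iter)
        xp = xp.to(device)

        with torch.no_grad():
            teacher_feats = collect_features(global_model, xp)
        student_feats = collect_features(local_model, xp)

        # Layer-wise distillation: align intermediate representations.
        kd = sum(F.mse_loss(student_feats[n], teacher_feats[n]) for n in layers)

        loss = ce + kd_weight * kd
        opt.zero_grad()
        loss.backward()
        opt.step()

    return local_model.state_dict()
```

In this reading, the proxy-data distillation term plays the role of the clients' regularized loss function described in the abstract: it pulls each biased local model toward the global model's representations at several layers while the cross-entropy term continues to fit the private data. The feature MSE could equally be replaced by a KL divergence on softened per-layer outputs without changing the overall structure.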