Personalized and privacy-enhanced federated learning framework via knowledge distillation
Fangchao Yu, Lina Wang, Bo Zeng, Kai Zhao, Rongwei Yu
Neurocomputing, Volume 575, Article 127290. DOI: 10.1016/j.neucom.2024.127290. Published 2024-01-17.
Federated learning is a distributed learning framework in which all participants jointly train a global model while keeping their data private. In existing federated learning frameworks, all clients share the same global model and cannot customize the model architecture to their needs. In this paper, we propose FLKD (federated learning with knowledge distillation), a personalized and privacy-enhanced federated learning framework. In FLKD, the global model serves as a medium for knowledge transfer: each client can customize its local model and train it alongside the global model through mutual learning. Furthermore, the participation of heterogeneous local models changes the training strategy of the global model, giving FLKD natural immunity to gradient leakage attacks. We conduct extensive empirical experiments to train and evaluate our framework. The results show that FLKD effectively addresses model heterogeneity and defends against gradient leakage attacks.
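The core mechanism the abstract describes, mutual learning between a client's personalized local model and the shared global model, can be sketched as below. This is a minimal illustration assuming the standard deep-mutual-learning objective (cross-entropy on labels plus a KL term toward the partner's soft predictions); the function name, signature, and temperature scaling are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of one FLKD-style mutual-learning step on a client.
# Assumed loss (standard deep mutual learning, not confirmed by the paper):
#   L = CE(model, y) + T^2 * KL(partner_soft || model_soft)
import torch
import torch.nn.functional as F

def mutual_learning_step(local_model, global_model, x, y,
                         local_opt, global_opt, temperature=1.0):
    local_logits = local_model(x)
    global_logits = global_model(x)

    # Update the personalized local model: task loss plus distillation
    # from the global model's (detached) soft predictions.
    local_loss = F.cross_entropy(local_logits, y) + F.kl_div(
        F.log_softmax(local_logits / temperature, dim=1),
        F.softmax(global_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    local_opt.zero_grad()
    local_loss.backward()
    local_opt.step()

    # Symmetric update of the global model, distilling from the local
    # model; local logits are recomputed since its weights just changed.
    global_loss = F.cross_entropy(global_logits, y) + F.kl_div(
        F.log_softmax(global_logits / temperature, dim=1),
        F.softmax(local_model(x).detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    global_opt.zero_grad()
    global_loss.backward()
    global_opt.step()
    return local_loss.item(), global_loss.item()
```

Because only the global model's weights are aggregated by the server, each client's architecture can differ freely; the detached soft targets mean no raw data or local gradients leave the client in this sketch.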
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.