{"title":"A Hybrid-Update Efficient Federated Learning Method Based on Multi-Teacher Knowledge Distillation in the Internet of Things","authors":"Yang Lan, Lixiang Li, Haipeng Peng","doi":"10.1002/cpe.70256","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The emergence of federated learning (FL) provides a new learning paradigm for private protection of data in the Internet of Things (IoT). However, it takes a lot of time for the server to obtain a global model with superior performance, which restricts the development of FL in the IoT. Therefore, this paper proposes a hybrid-update efficient federated learning method based on multi-teacher knowledge distillation in the Internet of Things. Firstly, considering the local training of each client, we design a data separation method of divide and conquer, which transforms data separation into a many-objective solution problem with constraints, and the unseparated data is used to train local models to speed up the training of the local model. Then, to alleviate the adverse effects of the above method, we introduce the knowledge distillation technology, and a multi-teacher model is designed for separated data. The teacher models are trained in advance, and they pass on their respective professional knowledge to the local models during the FL process. In the communication between the clients and the server, we only pass part of the model weights to further improve the overall efficiency of FL. To mitigate the impact of the above process, this paper proposes a hybrid-update federated learning strategy, which divides the update of the global model into federated aggregation update and generative weights update to improve the performance. Finally, we use the MNIST dataset, fashion-MNIST dataset, GTSRB dataset, SVHN dataset, and 20 Newsgroups dataset to simulate non-independent and identically distributed (non-IID) scenarios, and many experiments are performed to verify the effectiveness of the proposed method. Our method improves the overall efficiency of FL and further promotes the development of the IoT.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70256","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
The emergence of federated learning (FL) provides a new learning paradigm for protecting private data in the Internet of Things (IoT). However, the server needs considerable time to obtain a global model with superior performance, which restricts the development of FL in the IoT. This paper therefore proposes a hybrid-update efficient federated learning method based on multi-teacher knowledge distillation for the IoT. First, considering the local training of each client, we design a divide-and-conquer data separation method that casts data separation as a constrained many-objective optimization problem; the unseparated data are then used to train the local models, which speeds up local training. Second, to alleviate the adverse effects of this separation, we introduce knowledge distillation and design a multi-teacher model for the separated data. The teacher models are trained in advance and pass their respective specialized knowledge to the local models during the FL process. In the communication between the clients and the server, only part of the model weights is transmitted, which further improves the overall efficiency of FL. To mitigate the impact of this partial transmission, we propose a hybrid-update federated learning strategy that divides the update of the global model into a federated aggregation update and a generative weights update to improve performance. Finally, we use the MNIST, Fashion-MNIST, GTSRB, SVHN, and 20 Newsgroups datasets to simulate non-independent and identically distributed (non-IID) scenarios, and extensive experiments verify the effectiveness of the proposed method. Our method improves the overall efficiency of FL and further promotes the development of the IoT.
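To make the multi-teacher distillation step concrete, here is a minimal PyTorch-style sketch of one common way such a loss is formed: each pre-trained teacher covers one partition of the separated data, and the client's local (student) model is trained against a weighted blend of the teachers' temperature-softened predictions plus the hard labels. The function name, the weighting scheme, and the hyperparameters (alpha, T, teacher_weights) are illustrative assumptions, not the paper's own notation or algorithm.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          teacher_weights=None, T=2.0, alpha=0.5):
    """Hypothetical multi-teacher KD objective: cross-entropy on hard
    labels plus KL divergence to the averaged soft targets of several
    teachers (temperature-scaled)."""
    if teacher_weights is None:
        # Default assumption: all teachers contribute equally.
        teacher_weights = [1.0 / len(teacher_logits_list)] * len(teacher_logits_list)
    # Blend the teachers' temperature-softened class distributions.
    soft_target = sum(w * F.softmax(t / T, dim=1)
                      for w, t in zip(teacher_weights, teacher_logits_list))
    # KL term uses log-probs for the student; T*T rescales gradients,
    # as is standard in distillation.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  soft_target, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```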
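The hybrid global update can likewise be sketched in outline. Under the abstract's description, clients upload only a subset of the model weights; the server averages that subset (the federated aggregation update) and refreshes the remaining layers from a generative model (the generative weights update). In this sketch, `shared_keys` and `generate_weights` are placeholders for the paper's unstated choices of which layers are communicated and how the generator is built.

```python
def hybrid_update(global_state, client_states, shared_keys, generate_weights):
    """Hypothetical server-side hybrid update.

    global_state: dict mapping parameter names to tensors.
    client_states: list of partial state dicts uploaded by clients.
    shared_keys: names of the layers clients actually transmitted.
    generate_weights: callable standing in for the generative model.
    """
    new_state = dict(global_state)
    # Federated aggregation update: average only the uploaded layers.
    for key in shared_keys:
        new_state[key] = sum(cs[key] for cs in client_states) / len(client_states)
    # Generative weights update: synthesize the layers that were not uploaded.
    for key in set(global_state) - set(shared_keys):
        new_state[key] = generate_weights(key, new_state)
    return new_state
```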
About the Journal
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.