A Hybrid-Update Efficient Federated Learning Method Based on Multi-Teacher Knowledge Distillation in the Internet of Things

IF 1.5 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING
Yang Lan, Lixiang Li, Haipeng Peng
{"title":"物联网中基于多教师知识蒸馏的混合更新高效联邦学习方法","authors":"Yang Lan,&nbsp;Lixiang Li,&nbsp;Haipeng Peng","doi":"10.1002/cpe.70256","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The emergence of federated learning (FL) provides a new learning paradigm for private protection of data in the Internet of Things (IoT). However, it takes a lot of time for the server to obtain a global model with superior performance, which restricts the development of FL in the IoT. Therefore, this paper proposes a hybrid-update efficient federated learning method based on multi-teacher knowledge distillation in the Internet of Things. Firstly, considering the local training of each client, we design a data separation method of divide and conquer, which transforms data separation into a many-objective solution problem with constraints, and the unseparated data is used to train local models to speed up the training of the local model. Then, to alleviate the adverse effects of the above method, we introduce the knowledge distillation technology, and a multi-teacher model is designed for separated data. The teacher models are trained in advance, and they pass on their respective professional knowledge to the local models during the FL process. In the communication between the clients and the server, we only pass part of the model weights to further improve the overall efficiency of FL. To mitigate the impact of the above process, this paper proposes a hybrid-update federated learning strategy, which divides the update of the global model into federated aggregation update and generative weights update to improve the performance. Finally, we use the MNIST dataset, fashion-MNIST dataset, GTSRB dataset, SVHN dataset, and 20 Newsgroups dataset to simulate non-independent and identically distributed (non-IID) scenarios, and many experiments are performed to verify the effectiveness of the proposed method. Our method improves the overall efficiency of FL and further promotes the development of the IoT.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Hybrid-Update Efficient Federated Learning Method Based on Multi-Teacher Knowledge Distillation in the Internet of Things\",\"authors\":\"Yang Lan,&nbsp;Lixiang Li,&nbsp;Haipeng Peng\",\"doi\":\"10.1002/cpe.70256\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>The emergence of federated learning (FL) provides a new learning paradigm for private protection of data in the Internet of Things (IoT). However, it takes a lot of time for the server to obtain a global model with superior performance, which restricts the development of FL in the IoT. Therefore, this paper proposes a hybrid-update efficient federated learning method based on multi-teacher knowledge distillation in the Internet of Things. Firstly, considering the local training of each client, we design a data separation method of divide and conquer, which transforms data separation into a many-objective solution problem with constraints, and the unseparated data is used to train local models to speed up the training of the local model. Then, to alleviate the adverse effects of the above method, we introduce the knowledge distillation technology, and a multi-teacher model is designed for separated data. 
The teacher models are trained in advance, and they pass on their respective professional knowledge to the local models during the FL process. In the communication between the clients and the server, we only pass part of the model weights to further improve the overall efficiency of FL. To mitigate the impact of the above process, this paper proposes a hybrid-update federated learning strategy, which divides the update of the global model into federated aggregation update and generative weights update to improve the performance. Finally, we use the MNIST dataset, fashion-MNIST dataset, GTSRB dataset, SVHN dataset, and 20 Newsgroups dataset to simulate non-independent and identically distributed (non-IID) scenarios, and many experiments are performed to verify the effectiveness of the proposed method. Our method improves the overall efficiency of FL and further promotes the development of the IoT.</p>\\n </div>\",\"PeriodicalId\":55214,\"journal\":{\"name\":\"Concurrency and Computation-Practice & Experience\",\"volume\":\"37 25-26\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Concurrency and Computation-Practice & Experience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70256\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70256","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract



The emergence of federated learning (FL) provides a new learning paradigm for the privacy protection of data in the Internet of Things (IoT). However, the server needs considerable time to obtain a global model with superior performance, which restricts the development of FL in the IoT. This paper therefore proposes a hybrid-update efficient federated learning method based on multi-teacher knowledge distillation for the IoT. First, considering each client's local training, we design a divide-and-conquer data separation method that casts data separation as a constrained many-objective optimization problem; only the unseparated data is used to train the local models, which accelerates local training. Then, to offset the adverse effects of this separation, we introduce knowledge distillation and design a multi-teacher model for the separated data: the teacher models are trained in advance and pass their respective specialized knowledge to the local models during the FL process. In the communication between the clients and the server, only part of the model weights is transmitted, further improving the overall efficiency of FL. To mitigate the impact of this partial transmission, we propose a hybrid-update federated learning strategy that splits the global model update into a federated aggregation update and a generative weights update to improve performance. Finally, we simulate non-independent and identically distributed (non-IID) scenarios on the MNIST, Fashion-MNIST, GTSRB, SVHN, and 20 Newsgroups datasets, and extensive experiments verify the effectiveness of the proposed method. Our method improves the overall efficiency of FL and further promotes the development of the IoT.
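To make the described pipeline concrete, below is a minimal sketch of one client-side step combining multi-teacher distillation with a partial weight upload. This is not the authors' implementation (the abstract gives no code): PyTorch is assumed, and the names `distill_step` and `partial_upload`, the loss weighting `alpha`, the temperature `T`, and the choice of which layers are uploaded are all illustrative assumptions.

```python
# A minimal sketch, assuming PyTorch. Teachers are pre-trained models, each
# specialized on a separated data partition, and are frozen during FL.
import torch
import torch.nn.functional as F

def distill_step(student, teachers, x, y, optimizer, T=2.0, alpha=0.5):
    """One local training step: cross-entropy on the client's own labels
    plus an averaged KL-divergence term against every teacher's softened
    output distribution. alpha and T are illustrative hyperparameters."""
    student.train()
    optimizer.zero_grad()
    logits = student(x)

    # Hard-label loss on the client's (unseparated) local data.
    ce_loss = F.cross_entropy(logits, y)

    # Soft-label loss: teachers are frozen, so no gradients flow to them.
    with torch.no_grad():
        teacher_probs = [F.softmax(t(x) / T, dim=1) for t in teachers]
    log_p_student = F.log_softmax(logits / T, dim=1)
    kd_loss = sum(
        F.kl_div(log_p_student, p_t, reduction="batchmean")
        for p_t in teacher_probs
    ) / len(teachers) * (T * T)  # standard T^2 gradient rescaling

    loss = alpha * ce_loss + (1 - alpha) * kd_loss
    loss.backward()
    optimizer.step()
    return loss.item()

def partial_upload(model, layer_prefixes=("classifier",)):
    """Illustrative helper for the abstract's partial-weight communication:
    only parameters whose names match the given prefixes are sent to the
    server. Which layers are actually shared is an assumption here."""
    return {k: v.cpu() for k, v in model.state_dict().items()
            if k.startswith(layer_prefixes)}
```

Averaging the teachers' softened distributions is the simplest fusion choice for a sketch; the paper's actual combination of teacher knowledge, and how the server's generative weights update reconstructs the untransmitted weights, may differ.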

Source Journal
Concurrency and Computation-Practice & Experience (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Annual articles: 664
Review time: 9.6 months
Journal overview: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.