Parameter-Efficient Federated Learning for Edge Computing with End Devices Resource Limitation

Ying Qian, Lianbo Ma
DOI: 10.1109/IAI55780.2022.9976628
Published in: 2022 4th International Conference on Industrial Artificial Intelligence (IAI), 24 August 2022
Citations: 0

Abstract

Federated learning is an emerging machine learning paradigm that protects the privacy of data owners: private user data never leaves the devices. Massive numbers of data-collection devices are distributed across edge computing terminals, which provides a natural scenario for applying federated learning. In this article, a new federated learning algorithm for edge computing, based on transfer learning, is proposed to address the challenges of small data samples and resource-poor devices that arise when training deep neural networks (DNNs) on end devices. Because edge servers, unlike edge devices, have enough resources to train a DNN model, the algorithm trains the model on the cloud server using public data sets and adds batch-normalization (BN) layers, which contain only a small set of parameters, as a patch in the model. Edge devices then download the pre-trained model, whose weights are fixed except for the patch layers. The patch-layer parameters are trained on local data and aggregated by the edge server.
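
To make the training and aggregation workflow concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract, not the authors' implementation: the downloaded backbone is frozen, only the batch-normalization (BN) "patch" parameters are trained on local data, and the edge server averages the uploaded patches FedAvg-style. Function names such as `freeze_except_bn`, `local_update`, and `aggregate_patch`, as well as the choice of optimizer and loss, are illustrative assumptions.

```python
# Hypothetical sketch of BN-patch federated fine-tuning (not the paper's code).
import copy
import torch
import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)


def freeze_except_bn(model: nn.Module) -> None:
    """Freeze every weight downloaded from the server except the BN patch layers."""
    for module in model.modules():
        trainable = isinstance(module, BN_TYPES)
        for param in module.parameters(recurse=False):
            param.requires_grad = trainable


def bn_patch_keys(model: nn.Module) -> set:
    """state_dict keys of the BN patch (affine weights and running statistics)."""
    keys = set()
    for mod_name, module in model.named_modules():
        if isinstance(module, BN_TYPES):
            for suffix in ("weight", "bias", "running_mean", "running_var"):
                keys.add(f"{mod_name}.{suffix}")
    return keys


def local_update(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-2) -> dict:
    """One client round: train only the unfrozen BN parameters on local data."""
    model.train()
    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    keep = bn_patch_keys(model)
    # Upload only the small patch, not the whole model.
    return {k: v.detach().cpu().clone()
            for k, v in model.state_dict().items() if k in keep}


def aggregate_patch(client_patches: list, weights=None) -> dict:
    """FedAvg-style weighted average of the uploaded BN patches on the edge server."""
    if weights is None:
        weights = [1.0 / len(client_patches)] * len(client_patches)
    avg = copy.deepcopy(client_patches[0])
    for key in avg:
        avg[key] = sum(w * p[key] for w, p in zip(weights, client_patches))
    return avg


# Server side: merge the averaged patch back into the shared model and redistribute;
# strict=False leaves the frozen backbone weights untouched.
# global_model.load_state_dict(aggregate_patch(uploads), strict=False)
```

Because only the BN patch is trained, uploaded, and averaged, each round moves a small fraction of the full model's parameters, which is the parameter-efficient aspect emphasized in the title.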