Ying Qian, Lianbo Ma
2022 4th International Conference on Industrial Artificial Intelligence (IAI), published 2022-08-24
DOI: 10.1109/IAI55780.2022.9976628
Parameter-Efficient Federated Learning for Edge Computing with End Devices Resource Limitation
Federated learning is an emerging machine learning paradigm that protects data owners' privacy by keeping private user data on their devices. Massive numbers of data-collection devices are distributed across edge computing terminals, providing a natural scenario for applying federated learning. In this article, a new federated learning algorithm for edge computing, based on transfer learning, is proposed to address the challenges of small data samples and resource-poor devices that arise when training deep neural networks (DNNs) on end devices. Because cloud servers have sufficient resources to train a DNN model compared with edge devices, the algorithm pre-trains the model on the cloud server using public data sets and inserts batch-normalization (BN) layers, each containing only a small set of parameters, as patches in the model. Edge devices then download the pre-trained model, whose weights are frozen except for the patch layers. The patch-layer parameters are trained on local data and aggregated by the edge server.
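The scheme the abstract describes can be sketched roughly as follows: freeze every weight of a pre-trained backbone except the BN "patch" layers, train only those BN parameters on each client's local data, and have the server average just the BN state across clients. This is a minimal PyTorch illustration under assumed names (`make_model`, `freeze_except_bn`, `aggregate_bn` are hypothetical, not the authors' implementation, and the tiny backbone stands in for the paper's pre-trained DNN).

```python
# Hedged sketch: parameter-efficient federated learning where only
# batch-normalization (BN) "patch" layers are trainable and aggregated.
# Model and function names are illustrative assumptions, not the paper's code.
import copy
import torch
import torch.nn as nn

def make_model():
    # Stand-in backbone; in the paper this would be a DNN pre-trained
    # on public data at the cloud server before distribution.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1),
        nn.BatchNorm2d(8),          # BN "patch" layer: the only trainable part
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 28 * 28, 10),
    )

def freeze_except_bn(model):
    # Fix all pre-trained weights; leave only BN affine params trainable.
    for module in model.modules():
        trainable = isinstance(module, nn.BatchNorm2d)
        for p in module.parameters(recurse=False):
            p.requires_grad = trainable

def bn_keys(model):
    # State-dict keys belonging to BN layers (the only state clients upload).
    keys = set()
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            for k in module.state_dict():
                keys.add(f"{name}.{k}" if name else k)
    return keys

def aggregate_bn(global_model, client_models):
    # Server-side step: average only the BN parameters/buffers across clients
    # (plain mean, i.e. FedAvg restricted to the patch layers).
    gsd = global_model.state_dict()
    for k in bn_keys(global_model):
        tensors = [cm.state_dict()[k] for cm in client_models]
        if tensors[0].is_floating_point():
            gsd[k] = torch.stack(tensors).mean(dim=0)
        else:
            gsd[k] = tensors[0]     # integer counter (num_batches_tracked)
    global_model.load_state_dict(gsd)
```

Because only the BN layers' small parameter set is trained and communicated, both the on-device compute and the device-to-server traffic shrink dramatically compared with full-model federated averaging.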