Jun Wen, Xiusheng Li, Yupeng Chen, Xiaoli Li, Hang Mao
{"title":"联盟学习中资源受限边缘设备的有效方法","authors":"Jun Wen, Xiusheng Li, Yupeng Chen, Xiaoli Li, Hang Mao","doi":"10.1155/2024/8860376","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Federated learning (FL) is a novel approach to privacy-preserving machine learning, enabling remote devices to collaborate on model training without exchanging data among clients. However, it faces several challenges, including limited client-side processing capabilities and non-IID data distributions. To address these challenges, we propose a partitioned FL architecture that a large CNN is divided into smaller networks, which train concurrently with other clients. Within a cluster, multiple clients concurrently train the ensemble model. The Jensen–Shannon divergence quantifies the similarity of predictions across submodels. To address discrepancies in model parameters between local and global models caused by data distribution, we propose an ensemble learning method that integrates a penalty term into the local model’s loss calculation, thereby ensuring synchronization. This method amalgamates predictions and losses across multiple submodels, effectively mitigating accuracy loss during the integration process. Extensive experiments with various Dirichlet parameters demonstrate that our system achieves accelerated convergence and enhanced performance on the CIFAR-10 and CIFAR-100 image classification tasks while remaining robust to partial participation, diverse datasets, and numerous clients. On the CIFAR-10 dataset, our method outperforms FedAvg, FedProx, and SplitFed by 6%–8%; in contrast, it outperforms them by 12%–18% on CIFAR-100.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/8860376","citationCount":"0","resultStr":"{\"title\":\"An Effective Approach for Resource-Constrained Edge Devices in Federated Learning\",\"authors\":\"Jun Wen, Xiusheng Li, Yupeng Chen, Xiaoli Li, Hang Mao\",\"doi\":\"10.1155/2024/8860376\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>Federated learning (FL) is a novel approach to privacy-preserving machine learning, enabling remote devices to collaborate on model training without exchanging data among clients. However, it faces several challenges, including limited client-side processing capabilities and non-IID data distributions. To address these challenges, we propose a partitioned FL architecture that a large CNN is divided into smaller networks, which train concurrently with other clients. Within a cluster, multiple clients concurrently train the ensemble model. The Jensen–Shannon divergence quantifies the similarity of predictions across submodels. To address discrepancies in model parameters between local and global models caused by data distribution, we propose an ensemble learning method that integrates a penalty term into the local model’s loss calculation, thereby ensuring synchronization. This method amalgamates predictions and losses across multiple submodels, effectively mitigating accuracy loss during the integration process. 
Extensive experiments with various Dirichlet parameters demonstrate that our system achieves accelerated convergence and enhanced performance on the CIFAR-10 and CIFAR-100 image classification tasks while remaining robust to partial participation, diverse datasets, and numerous clients. On the CIFAR-10 dataset, our method outperforms FedAvg, FedProx, and SplitFed by 6%–8%; in contrast, it outperforms them by 12%–18% on CIFAR-100.</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":\"2024 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-11-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/8860376\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/2024/8860376\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/8860376","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
An Effective Approach for Resource-Constrained Edge Devices in Federated Learning
Federated learning (FL) is a novel approach to privacy-preserving machine learning that enables remote devices to collaborate on model training without exchanging data among clients. However, it faces several challenges, including limited client-side processing capabilities and non-IID data distributions. To address these challenges, we propose a partitioned FL architecture in which a large CNN is divided into smaller subnetworks that clients train concurrently. Within a cluster, multiple clients jointly train the ensemble model, and the Jensen–Shannon divergence is used to quantify the similarity of predictions across submodels. To address discrepancies between local and global model parameters caused by non-IID data distributions, we propose an ensemble learning method that adds a penalty term to the local model’s loss, thereby keeping local training synchronized with the global model. This method amalgamates predictions and losses across multiple submodels, effectively mitigating the accuracy loss incurred during integration. Extensive experiments with various Dirichlet parameters demonstrate that our system achieves accelerated convergence and enhanced performance on the CIFAR-10 and CIFAR-100 image classification tasks while remaining robust to partial participation, diverse datasets, and large numbers of clients. Our method outperforms FedAvg, FedProx, and SplitFed by 6%–8% on CIFAR-10 and by 12%–18% on CIFAR-100.
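The abstract mentions two concrete ingredients: a Jensen–Shannon divergence between submodel predictions and a penalty term added to the local loss to keep local parameters close to the global model. The sketch below illustrates both ideas in PyTorch; it is not the authors' released code, and the function names, the proximal-style form of the penalty, and the coefficient `mu` are illustrative assumptions.

```python
# Minimal sketch, assuming softmax classifiers and a FedProx-style penalty;
# names (js_divergence, local_loss, mu) are illustrative, not from the paper.
import torch
import torch.nn.functional as F


def js_divergence(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two batches of class logits."""
    p = F.softmax(logits_a, dim=-1)
    q = F.softmax(logits_b, dim=-1)
    m = 0.5 * (p + q)
    log_m = torch.log(m.clamp_min(1e-12))  # clamp for numerical stability
    # F.kl_div(log_m, p) computes KL(p || m); average the two directions.
    kl_pm = F.kl_div(log_m, p, reduction="batchmean")
    kl_qm = F.kl_div(log_m, q, reduction="batchmean")
    return 0.5 * (kl_pm + kl_qm)


def local_loss(logits, targets, local_params, global_params, mu: float = 0.01):
    """Cross-entropy plus a penalty pulling local weights toward the global model."""
    ce = F.cross_entropy(logits, targets)
    penalty = sum(((w - w_g.detach()) ** 2).sum()
                  for w, w_g in zip(local_params, global_params))
    return ce + 0.5 * mu * penalty
```

In such a setup, `js_divergence` could score how closely two submodels in a cluster agree on the same batch, while `local_loss` would replace plain cross-entropy during each client's local update; how the paper weights and combines these terms is described in the full text.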
Journal introduction:
The International Journal of Intelligent Systems serves as a forum for individuals interested in the broad body of theory underlying the construction of intelligent systems. In its peer-reviewed format, the journal presents editorials written by today's experts in the field. Because new developments are introduced every day, there is much to be learned: examination, analysis, creation, information retrieval, man–computer interaction, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.