{"title":"A Green Multi-Attribute Client Selection for Over-The-Air Federated Learning: A Grey-Wolf-Optimizer Approach","authors":"Maryam Ben Driss, Essaid Sabir, Halima Elbiaze, Abdoulaye Baniré Diallo, Mohamed Sadik","doi":"arxiv-2409.11442","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) has gained attention across various industries for\nits capability to train machine learning models without centralizing sensitive\ndata. While this approach offers significant benefits such as privacy\npreservation and decreased communication overhead, it presents several\nchallenges, including deployment complexity and interoperability issues,\nparticularly in heterogeneous scenarios or resource-constrained environments.\nOver-the-air (OTA) FL was introduced to tackle these challenges by\ndisseminating model updates without necessitating direct device-to-device\nconnections or centralized servers. However, OTA-FL brought forth limitations\nassociated with heightened energy consumption and network latency. In this\npaper, we propose a multi-attribute client selection framework employing the\ngrey wolf optimizer (GWO) to strategically control the number of participants\nin each round and optimize the OTA-FL process while considering accuracy,\nenergy, delay, reliability, and fairness constraints of participating devices.\nWe evaluate the performance of our multi-attribute client selection approach in\nterms of model loss minimization, convergence time reduction, and energy\nefficiency. In our experimental evaluation, we assessed and compared the\nperformance of our approach against the existing state-of-the-art methods. Our\nresults demonstrate that the proposed GWO-based client selection outperforms\nthese baselines across various metrics. Specifically, our approach achieves a\nnotable reduction in model loss, accelerates convergence time, and enhances\nenergy efficiency while maintaining high fairness and reliability indicators.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"591 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Federated Learning (FL) has gained attention across various industries for its ability to train machine learning models without centralizing sensitive data. While this approach offers significant benefits, such as privacy preservation and reduced communication overhead, it also presents challenges, including deployment complexity and interoperability issues, particularly in heterogeneous or resource-constrained environments. Over-the-air (OTA) FL was introduced to tackle these challenges by disseminating model updates without requiring direct device-to-device connections or centralized servers. However, OTA-FL brings its own limitations: increased energy consumption and network latency. In this paper, we propose a multi-attribute client selection framework that employs the grey wolf optimizer (GWO) to strategically control the number of participants in each round and to optimize the OTA-FL process under the accuracy, energy, delay, reliability, and fairness constraints of the participating devices. We evaluate our multi-attribute client selection approach in terms of model loss minimization, convergence time reduction, and energy efficiency, and compare it against existing state-of-the-art methods. Our results demonstrate that the proposed GWO-based client selection outperforms these baselines across these metrics: it notably reduces model loss, shortens convergence time, and improves energy efficiency while maintaining high fairness and reliability indicators.
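The full formulation is given in the paper itself; as a rough illustration of how GWO can drive multi-attribute client selection, the sketch below evolves binary selection vectors under the standard GWO update (three leaders, coefficient a decaying from 2 to 0) against an assumed weighted fitness over accuracy, energy, delay, and reliability. The synthetic attribute values, the weights, the 0.5 binarization threshold, and the omission of the fairness term are all assumptions for brevity, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-client attributes, normalized to [0, 1]:
# higher is better for accuracy/reliability, lower for energy/delay.
N_CLIENTS = 50
attrs = {
    "accuracy":    rng.uniform(0.5, 1.0, N_CLIENTS),
    "energy":      rng.uniform(0.0, 1.0, N_CLIENTS),
    "delay":       rng.uniform(0.0, 1.0, N_CLIENTS),
    "reliability": rng.uniform(0.5, 1.0, N_CLIENTS),
}
# Assumed attribute weights; the paper's fairness term is omitted here.
WEIGHTS = {"accuracy": 0.4, "energy": 0.2, "delay": 0.2, "reliability": 0.2}

def fitness(selection: np.ndarray) -> float:
    """Score a binary client-selection vector; larger is better."""
    chosen = selection.astype(bool)
    if not chosen.any():
        return -np.inf  # selecting nobody is invalid
    return (WEIGHTS["accuracy"]    * attrs["accuracy"][chosen].mean()
          + WEIGHTS["reliability"] * attrs["reliability"][chosen].mean()
          - WEIGHTS["energy"]      * attrs["energy"][chosen].mean()
          - WEIGHTS["delay"]       * attrs["delay"][chosen].mean())

def gwo_select(n_wolves: int = 20, n_iters: int = 100) -> np.ndarray:
    """Binary GWO: continuous positions in [0,1], thresholded at 0.5."""
    pos = rng.uniform(0.0, 1.0, (n_wolves, N_CLIENTS))
    for t in range(n_iters):
        a = 2.0 - 2.0 * t / n_iters  # decreases linearly from 2 to 0
        scores = np.array([fitness(p > 0.5) for p in pos])
        order = np.argsort(scores)[::-1]
        alpha, beta, delta = pos[order[0]], pos[order[1]], pos[order[2]]
        # Standard GWO update: average of moves toward the three leaders.
        new_pos = np.zeros_like(pos)
        for leader in (alpha, beta, delta):
            A = a * (2.0 * rng.uniform(size=pos.shape) - 1.0)
            C = 2.0 * rng.uniform(size=pos.shape)
            D = np.abs(C * leader - pos)
            new_pos += leader - A * D
        pos = np.clip(new_pos / 3.0, 0.0, 1.0)
    scores = np.array([fitness(p > 0.5) for p in pos])
    return np.flatnonzero(pos[scores.argmax()] > 0.5)

selected = gwo_select()
print(f"selected {selected.size} clients:", selected)
```

This only illustrates the GWO mechanics; the paper's actual objective additionally accounts for fairness and the OTA-specific energy and latency models, and its encoding of selections may differ from the simple threshold used here.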