On-Demand Model and Client Deployment in Federated Learning With Deep Reinforcement Learning
Mario Chahoud; Hani Sami; Azzam Mourad; Hadi Otrok; Jamal Bentahar; Mohsen Guizani
IEEE Internet of Things Journal, vol. 12, no. 14, pp. 26685-26698, published 16 April 2025. DOI: 10.1109/JIOT.2025.3561722. Available at https://ieeexplore.ieee.org/document/10966421/
In federated learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, in dynamic and mobile environments, the availability of FL clients fluctuates as devices may become inaccessible, leading to inefficient client selection and reduced model performance. Current solutions often fail to adapt quickly to these changes, creating a gap in achieving real-time client availability and efficient data utilization. To address this, we propose a deep reinforcement learning (DRL) on-demand solution that deploys new clients using Docker containers on the fly. Our on-demand solution, employing DRL, targets client availability and selection while accounting for data shifts and the complexities of container deployment. It employs an autonomous end-to-end approach to model deployment and client selection. The DRL strategy leverages a Markov decision process (MDP) framework, with a Master Learner and a Joiner Learner to optimize decision making. The designed cost functions account for the complexity of dynamic client deployment and selection, ensuring effective resource management and service reliability. Simulated tests show that our architecture adapts readily to changes in the environment and responds to on-demand requests while reducing the number of learning rounds by 20%–50% compared with existing approaches. This highlights its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and traditional reinforcement learning methods.
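To make the abstract's setup concrete, below is a minimal, illustrative sketch of an on-demand client deployment MDP. It is not the authors' implementation: tabular Q-learning stands in for the paper's DRL Master Learner, the Joiner Learner is omitted, and all names and values (N_CLIENTS, DEPLOY_COST, the reward scheme) are hypothetical assumptions chosen only to show the state/action/cost structure the abstract describes.

```python
# Hypothetical sketch: tabular Q-learning as a lightweight stand-in for the
# paper's DRL agent. State = which FL clients are currently available;
# actions = select an existing client, or deploy a new containerized one.
import random

N_CLIENTS = 5                                    # assumed pool of potential FL clients
ACTIONS = list(range(N_CLIENTS)) + [N_CLIENTS]   # 0..4: select client i; 5: deploy new
DEPLOY_COST = 0.5                                # assumed cost of spinning up a container
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2                # learning rate, discount, exploration

def availability_state():
    # Each client is randomly available (1) or not (0) each round,
    # mimicking fluctuating availability in mobile environments.
    return tuple(random.randint(0, 1) for _ in range(N_CLIENTS))

def reward(state, action):
    # Selecting an available client is cheap and useful; selecting an
    # unavailable one wastes a round; deploying a new containerized
    # client always succeeds but pays DEPLOY_COST.
    if action == N_CLIENTS:                      # deploy a new client on the fly
        return 1.0 - DEPLOY_COST
    return 1.0 if state[action] == 1 else -1.0

Q = {}
for episode in range(5000):
    state = availability_state()
    if random.random() < EPS:                    # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    r = reward(state, action)
    next_state = availability_state()
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)

# With no client available, the learned policy should fall back to
# on-demand deployment (action N_CLIENTS) rather than a failed selection.
print(max(ACTIONS, key=lambda a: Q.get(((0,) * N_CLIENTS, a), 0.0)))
```

Under these assumed rewards, the learned policy prefers available clients and falls back to deploying a new container only when none are available, which is the trade-off the paper's cost functions are designed to balance; the actual system additionally models data shifts and container deployment complexity.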
Journal Introduction:
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include: IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.