On-Demand Model and Client Deployment in Federated Learning With Deep Reinforcement Learning

IF 8.9 · CAS Zone 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Mario Chahoud;Hani Sami;Azzam Mourad;Hadi Otrok;Jamal Bentahar;Mohsen Guizani
{"title":"利用深度强化学习在联盟学习中按需部署模型和客户端","authors":"Mario Chahoud;Hani Sami;Azzam Mourad;Hadi Otrok;Jamal Bentahar;Mohsen Guizani","doi":"10.1109/JIOT.2025.3561722","DOIUrl":null,"url":null,"abstract":"In federated learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, in dynamic and mobile environments, the availability of FL clients fluctuates as devices may become inaccessible, leading to inefficient client selection and reduced model performance. Current solutions often fail to adapt quickly to these changes, creating a gap in achieving real-time client availability and efficient data utilization. To address this, we propose a deep reinforcement learning (DRL) on-demand solution, deploying new clients using Docker Containers on-the-fly. Our on-demand solution, employing DRL, targets client availability and selection while considering data shifts and container deployment complexities. It employs an autonomous end-to-end approach for handling model deployment and client selection. The DRL strategy leverages a Markov decision process (MDP) framework, with a Master Learner and a Joiner Learner to optimize decision-making. The designed cost functions account for the complexity of dynamic client deployment and selection, ensuring effective resource management and service reliability. Simulated tests show that our architecture can easily adapt to changes in the environment and respond to on-demand requests while reducing the number of learning rounds used by 20%–50% compared with existing approaches. This highlights its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and traditional reinforcement learning methods.","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"12 14","pages":"26685-26698"},"PeriodicalIF":8.9000,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On-Demand Model and Client Deployment in Federated Learning With Deep Reinforcement Learning\",\"authors\":\"Mario Chahoud;Hani Sami;Azzam Mourad;Hadi Otrok;Jamal Bentahar;Mohsen Guizani\",\"doi\":\"10.1109/JIOT.2025.3561722\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In federated learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, in dynamic and mobile environments, the availability of FL clients fluctuates as devices may become inaccessible, leading to inefficient client selection and reduced model performance. Current solutions often fail to adapt quickly to these changes, creating a gap in achieving real-time client availability and efficient data utilization. To address this, we propose a deep reinforcement learning (DRL) on-demand solution, deploying new clients using Docker Containers on-the-fly. Our on-demand solution, employing DRL, targets client availability and selection while considering data shifts and container deployment complexities. It employs an autonomous end-to-end approach for handling model deployment and client selection. 
The DRL strategy leverages a Markov decision process (MDP) framework, with a Master Learner and a Joiner Learner to optimize decision-making. The designed cost functions account for the complexity of dynamic client deployment and selection, ensuring effective resource management and service reliability. Simulated tests show that our architecture can easily adapt to changes in the environment and respond to on-demand requests while reducing the number of learning rounds used by 20%–50% compared with existing approaches. This highlights its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and traditional reinforcement learning methods.\",\"PeriodicalId\":54347,\"journal\":{\"name\":\"IEEE Internet of Things Journal\",\"volume\":\"12 14\",\"pages\":\"26685-26698\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Internet of Things Journal\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10966421/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10966421/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In federated learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, in dynamic and mobile environments, the availability of FL clients fluctuates as devices may become inaccessible, leading to inefficient client selection and reduced model performance. Current solutions often fail to adapt quickly to these changes, creating a gap in achieving real-time client availability and efficient data utilization. To address this, we propose a deep reinforcement learning (DRL) on-demand solution, deploying new clients using Docker Containers on-the-fly. Our on-demand solution, employing DRL, targets client availability and selection while considering data shifts and container deployment complexities. It employs an autonomous end-to-end approach for handling model deployment and client selection. The DRL strategy leverages a Markov decision process (MDP) framework, with a Master Learner and a Joiner Learner to optimize decision-making. The designed cost functions account for the complexity of dynamic client deployment and selection, ensuring effective resource management and service reliability. Simulated tests show that our architecture can easily adapt to changes in the environment and respond to on-demand requests while reducing the number of learning rounds used by 20%–50% compared with existing approaches. This highlights its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and traditional reinforcement learning methods.
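The abstract names the moving parts (an MDP over fluctuating client availability, cost functions for deployment and selection, on-the-fly Docker containers) without giving pseudocode. Below is a minimal sketch, assuming details the paper does not state here: the toy environment, the cost values, the image tag "fl-client:latest", and the use of tabular Q-learning in place of the authors' Master Learner / Joiner Learner DRL design are all hypothetical stand-ins, not the paper's implementation. The container launch uses the real Docker SDK for Python.

```python
# A minimal sketch, assuming details the abstract does not give: every name,
# cost value, and the image tag below is a hypothetical stand-in, not the
# authors' implementation.
import random

DEPLOY_COST = 1.0          # assumed penalty for spinning up a new client container
SELECT_REWARD = 2.0        # assumed reward for selecting a client that is reachable
UNAVAILABLE_PENALTY = 1.0  # assumed penalty for selecting a client that dropped out


def deploy_client_container(image="fl-client:latest"):
    """Launch a new FL client on the fly as a Docker container.

    Uses the Docker SDK for Python (pip install docker); the image tag is a
    placeholder, since the paper does not publish one.
    """
    import docker  # imported lazily so the rest of the sketch runs without it

    return docker.from_env().containers.run(image, detach=True)


class ClientSelectionMDP:
    """Toy MDP over fluctuating client availability.

    State: a tuple of per-client availability flags (1 = reachable).
    Actions: 0..n-1 select an existing client; action n deploys a new one.
    """

    def __init__(self, n_clients=5, churn=0.3):
        self.churn = churn  # probability a client becomes unreachable each round
        self.state = tuple(1 for _ in range(n_clients))

    def step(self, action):
        if action == len(self.state):  # deploy: pay now, gain an available client
            reward = -DEPLOY_COST
            availability = self.state + (1,)
            # In the full architecture this is where deploy_client_container()
            # would actually start the new client.
        else:  # select: rewarded only if the chosen client is still reachable
            reward = SELECT_REWARD if self.state[action] else -UNAVAILABLE_PENALTY
            availability = self.state
        # Mobile clients churn between rounds; this fluctuation is exactly the
        # dynamic the learning agent has to absorb.
        self.state = tuple(0 if random.random() < self.churn else a
                           for a in availability)
        return self.state, reward


# Tabular Q-learning stand-in for the DRL agent, just to show the loop shape.
env = ClientSelectionMDP()
q = {}
state = env.state
for round_ in range(500):
    n_actions = len(state) + 1
    if random.random() < 0.2:  # epsilon-greedy exploration
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: q.get((state, a), 0.0))
    next_state, reward = env.step(action)
    best_next = max(q.get((next_state, a), 0.0)
                    for a in range(len(next_state) + 1))
    q[(state, action)] = q.get((state, action), 0.0) + 0.1 * (
        reward + 0.9 * best_next - q.get((state, action), 0.0))
    state = next_state
```

The tabular agent is deliberate: it keeps the sketch self-contained, and it illustrates why the paper moves to DRL, since the availability-flag state space grows with every deployed client and quickly outruns a lookup table.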
Source journal
IEEE Internet of Things Journal (Computer Science, Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Articles published: 1982
Journal introduction: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impacts on sensor technologies, big data management, and future internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in standard development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.