Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning

IF 3.7 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Deyu Zhang, Wang Sun, Zi-Ang Zheng, Wenxin Chen, Shiwen He
{"title":"Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning","authors":"Deyu Zhang, Wang Sun, Zi-Ang Zheng, Wenxin Chen, Shiwen He","doi":"10.1186/s13677-023-00515-6","DOIUrl":null,"url":null,"abstract":"Abstract As a new approach to machine learning, Federated learning enables distributned traiing on edge devices and aggregates local models into a global model. The edge devices that participate in federated learning are highly heterogeneous in terms of computing power, device state, and data distribution, making it challenging to converge models efficiently. In this paper, we propose FedState, which is an adaptive device sampling and deadline determination technique for cloud-based heterogeneous federated learning. Specifically, we consider the cloud as a central server that orchestrates federated learning on a large pool of edge devices. To improve the efficiency of model convergence in heterogeneous federated learning, our approach adaptively samples devices to join each round of training and determines the deadline for result submission based on device state. We analyze existing device usage traces to build device state models in different scenarios and design a dynamic importance measurement mechanism based on device availability, data utility, and computing power. We also propose a deadline determination module that dynamically sets the deadline according to the availability of all sampled devices, local training time, and communication time, enabling more clients to submit local models more efficiently. Due to the variability of device state, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL) that can dynamically adjust our sampling and deadline policies according to the current environment state. We demonstrate the effectiveness of our approach through a series of experiments with the FMNIST dataset and show that our method outperforms current state-of-the-art approaches in terms of model accuracy and convergence speed.","PeriodicalId":56007,"journal":{"name":"Journal of Cloud Computing-Advances Systems and Applications","volume":"48 5","pages":"0"},"PeriodicalIF":3.7000,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cloud Computing-Advances Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s13677-023-00515-6","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

As a new approach to machine learning, federated learning enables distributed training on edge devices and aggregates local models into a global model. The edge devices that participate in federated learning are highly heterogeneous in terms of computing power, device state, and data distribution, making efficient model convergence challenging. In this paper, we propose FedState, an adaptive device sampling and deadline determination technique for cloud-based heterogeneous federated learning. Specifically, we consider the cloud as a central server that orchestrates federated learning on a large pool of edge devices. To improve the efficiency of model convergence in heterogeneous federated learning, our approach adaptively samples the devices that join each round of training and determines the deadline for result submission based on device state. We analyze existing device usage traces to build device state models for different scenarios and design a dynamic importance measurement mechanism based on device availability, data utility, and computing power. We also propose a deadline determination module that dynamically sets the deadline according to the availability of all sampled devices, local training time, and communication time, enabling more clients to submit local models efficiently. Because device state varies over time, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL) that dynamically adjusts the sampling and deadline policies according to the current environment state. We demonstrate the effectiveness of our approach through a series of experiments on the FMNIST dataset and show that our method outperforms current state-of-the-art approaches in terms of model accuracy and convergence speed.
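The abstract does not give the underlying formulas, but the pipeline it describes (importance-weighted device sampling followed by availability-aware deadline selection) can be sketched concretely. The Python below is a hypothetical illustration, not the authors' implementation: the `Device` fields, the fixed importance weights, and the rule of choosing the smallest deadline that covers a target fraction of sampled devices are all assumptions made for this sketch.

```python
import random
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical per-device state; field names are illustrative
    # and not taken from the paper.
    availability: float    # estimated probability the device stays online this round
    data_utility: float    # e.g., normalized local loss or gradient norm
    compute_speed: float   # local training samples processed per second
    comm_time: float       # seconds to upload a local model to the cloud

def importance(d: Device, w_avail: float = 0.4,
               w_data: float = 0.4, w_comp: float = 0.2) -> float:
    """Dynamic importance score combining availability, data utility,
    and computing power. The weights here are assumed constants; in the
    paper's design they would be adapted round by round."""
    return (w_avail * d.availability
            + w_data * d.data_utility
            + w_comp * d.compute_speed)

def sample_devices(pool: list, k: int) -> list:
    """Importance-weighted sampling of k devices without replacement."""
    pool, scores = list(pool), [importance(d) for d in pool]
    chosen = []
    for _ in range(min(k, len(pool))):
        r, acc = random.uniform(0.0, sum(scores)), 0.0
        for i, s in enumerate(scores):
            acc += s
            if acc >= r:
                chosen.append(pool.pop(i))
                scores.pop(i)
                break
    return chosen

def round_time(d: Device, samples: int = 600, epochs: int = 5) -> float:
    """Estimated local training time plus communication time."""
    return samples * epochs / d.compute_speed + d.comm_time

def choose_deadline(sampled: list, target_fraction: float = 0.8) -> float:
    """Smallest deadline that still lets a target fraction of the sampled
    devices finish and submit, mirroring the availability-aware deadline
    module described in the abstract."""
    times = sorted(round_time(d) for d in sampled)
    return times[max(0, int(target_fraction * len(times)) - 1)]
```

In the full system, a DRL agent observes the current device states and adjusts both the sampling policy and the deadline policy each round; the fixed weights and `target_fraction` above merely stand in for that learned behavior.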
Source journal
Journal of Cloud Computing-Advances Systems and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 6.80
Self-citation rate: 7.50%
Annual articles: 76
Review time: 75 days
Journal description: The Journal of Cloud Computing: Advances, Systems and Applications (JoCCASA) publishes research articles on all aspects of Cloud Computing. Principally, articles address topics that are core to Cloud Computing, focusing on Cloud applications, Cloud systems, and the advances that will lead to the Clouds of the future. Comprehensive review and survey articles that offer new insights and lay the foundations for further exploratory and experimental work are also relevant.