DDPG-AdaptConfig: A deep reinforcement learning framework for adaptive device selection and training configuration in heterogeneity federated learning

IF 6.2 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS
DOI: 10.1016/j.future.2024.107528
Journal: Future Generation Computer Systems-The International Journal of Escience
Publication date: 2024-09-12 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0167739X24004928
Citations: 0

Abstract

Federated Learning (FL) is a distributed machine learning approach that protects user privacy by collaboratively training shared models across devices without sharing their raw personal data. Despite its advantages, FL faces issues of increased convergence time and decreased accuracy due to the heterogeneity of data and systems across devices. Existing methods for solving these issues using reinforcement learning often ignore the adaptive configuration of local training hyperparameters to suit varying data characteristics and system resources. Moreover, they frequently overlook the heterogeneous information contained within local model parameters. To address these problems, we propose the DDPG-AdaptConfig framework based on Deep Deterministic Policy Gradient (DDPG) for adaptive device selection and local training hyperparameter configuration in FL, to speed up convergence and ensure high model accuracy. Additionally, we develop a new actor network that integrates the transformer mechanism to extract heterogeneous information from model parameters, which assists in device selection and hyperparameter configuration. Furthermore, we introduce a clustering-based aggregation strategy to accommodate heterogeneity and prevent performance declines. Experimental results show that our DDPG-AdaptConfig achieves significant improvements over existing baselines.
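The clustering-based aggregation strategy is only described at a high level in the abstract. As an illustration of the general idea (group similar local models, then average within each group so dissimilar clients are not blended into one global model), here is a minimal numpy sketch. The use of k-means, the farthest-first initialisation, the function names, and the per-cluster FedAvg weighting are assumptions for the sketch, not the paper's exact method:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Tiny k-means: farthest-first initialisation, then Lloyd iterations."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Next center: the point farthest from every chosen center.
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        # Assign each client model to its nearest center, then recenter.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_aggregate(client_params, sample_counts, k=2):
    """Cluster flattened local models, then FedAvg within each cluster."""
    X = np.stack([p.ravel() for p in client_params])
    labels = kmeans(X, k)
    w = np.asarray(sample_counts, dtype=float)
    cluster_models = {}
    for j in np.unique(labels):
        m = labels == j
        weights = (w[m] / w[m].sum())[:, None]  # sample-count weighting, as in FedAvg
        cluster_models[int(j)] = (X[m] * weights).sum(axis=0)
    return labels, cluster_models
```

With two well-separated groups of client models, the sketch produces one aggregated model per group instead of a single average that serves neither group well.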
Source journal metrics
CiteScore: 19.90
Self-citation rate: 2.70%
Annual publications: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.