DDPG-AdaptConfig: A deep reinforcement learning framework for adaptive device selection and training configuration in heterogeneity federated learning

Journal: Future Generation Computer Systems: The International Journal of eScience (Q1, Computer Science, Theory & Methods; Impact Factor 6.2)
DOI: 10.1016/j.future.2024.107528
Published: 2024-09-12 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167739X24004928

Abstract: Federated Learning (FL) is a distributed machine learning approach that protects user privacy by collaboratively training shared models across devices without sharing their raw personal data. Despite its advantages, FL suffers from longer convergence times and reduced accuracy due to the heterogeneity of data and systems across devices. Existing reinforcement-learning-based methods for these issues often ignore the adaptive configuration of local training hyperparameters to suit varying data characteristics and system resources, and they frequently overlook the heterogeneous information contained within local model parameters. To address these problems, we propose the DDPG-AdaptConfig framework, based on Deep Deterministic Policy Gradient (DDPG), for adaptive device selection and local training hyperparameter configuration in FL, speeding up convergence while ensuring high model accuracy. Additionally, we develop a new actor network that integrates the transformer mechanism to extract heterogeneous information from model parameters, which assists in device selection and hyperparameter configuration. Furthermore, we introduce a clustering-based aggregation strategy to accommodate heterogeneity and prevent performance declines. Experimental results show that DDPG-AdaptConfig achieves significant improvements over existing baselines.
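The abstract describes a deterministic policy that maps per-device state to a continuous action used both to select devices and to set their local training hyperparameters. The paper's actual actor is a transformer-based network trained with DDPG; the sketch below is only a minimal illustration of that selection-and-configuration idea, with a hand-weighted linear scorer standing in for the learned actor and invented state fields (`data_size`, `speed`, `loss`).

```python
import math
import random

random.seed(0)

# Hypothetical per-device state; the real framework would draw these
# from observed system resources and local data characteristics.
devices = [
    {"id": i,
     "data_size": random.randint(100, 1000),  # local samples
     "speed": random.uniform(0.5, 2.0),       # relative compute speed
     "loss": random.uniform(0.5, 1.5)}        # last reported local loss
    for i in range(8)
]

def actor(state, w):
    """Toy deterministic policy: a linear score squashed to (0, 1).

    In DDPG-AdaptConfig this role is played by a transformer-based
    actor network trained against a critic; here the weights are fixed
    purely for illustration.
    """
    z = (w[0] * state["data_size"] / 1000
         + w[1] * state["speed"]
         + w[2] * state["loss"])
    return 1.0 / (1.0 + math.exp(-z))  # selection score in (0, 1)

w = [1.0, 0.5, -0.8]  # illustrative fixed weights (no training here)

# Continuous action -> device selection plus per-device hyperparameters,
# one plausible reading of "adaptive configuration".
selected = []
for d in devices:
    score = actor(d, w)
    if score > 0.5:  # threshold the continuous score into a selection
        selected.append({"id": d["id"],
                         "lr": 0.01 * score,          # scaled learning rate
                         "local_epochs": 1 + int(4 * score)})

print(f"selected {len(selected)}/{len(devices)} devices")
```

Only selected devices would then run local training with their assigned learning rate and epoch count before the server applies its (here omitted) clustering-based aggregation.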
Journal introduction:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.