Data-aware Hierarchical Federated Learning via Task Offloading
Mulei Ma, Liantao Wu, Wenxiang Liu, Nanxi Chen, Ziyu Shao, Yang Yang
GLOBECOM 2022 - 2022 IEEE Global Communications Conference, December 4, 2022. DOI: 10.1109/GLOBECOM48099.2022.10000924
To cope with the high communication overhead caused by frequent model aggregation in Federated Learning (FL) under Multi-access Edge Computing (MEC) scenarios, Hierarchical Federated Edge Learning (HFEL) has emerged as a promising framework. HFEL offloads tasks to edge servers for partial model aggregation, thereby reducing network traffic. However, most existing research focuses on resource optimization for HFEL without considering the impact of data characteristics, and therefore cannot guarantee the quality of FL training. To this end, we propose a task offloading approach under HFEL that accounts for both data and resource heterogeneity, improving training performance while reducing system cost. Specifically, we leverage information entropy to incorporate data statistical features into the cost function and reshape edge datasets. In addition, we apply Multi-Agent Deep Deterministic Policy Gradient (MADDPG) with a resource allocation module to generate distributed offloading policies more efficiently. Our algorithm not only relies on local observations to select actions but also accounts for device heterogeneity, allowing it to adapt to unstable edge environments. Extensive experiments on multiple datasets and against multiple baselines demonstrate that our algorithm effectively improves the accuracy of aggregated models while reducing system cost.
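The abstract does not spell out the exact cost function, so the following is a minimal, hypothetical Python sketch of how a device's label-distribution entropy could be combined with resource terms in an offloading cost. The function names, the weights (w_latency, w_energy, w_data), and the normalization are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np


def label_entropy(labels, num_classes):
    """Shannon entropy (in bits) of a device's label distribution.

    A uniform label distribution yields the maximum entropy log2(num_classes);
    a single-class (highly non-IID) local dataset yields 0.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    probs = counts / counts.sum()
    probs = probs[probs > 0]  # drop empty classes to avoid log(0)
    return float(-(probs * np.log2(probs)).sum())


def offloading_cost(latency, energy, labels, num_classes,
                    w_latency=1.0, w_energy=1.0, w_data=1.0):
    """Illustrative cost for offloading one device's task to an edge server.

    Resource terms (latency, energy) are penalized, while higher label
    entropy -- i.e., a more statistically balanced local dataset -- is
    rewarded, so that an offloading policy minimizing this cost tends to
    reshape edge datasets toward better class coverage.
    """
    data_term = label_entropy(labels, num_classes) / np.log2(num_classes)  # normalize to [0, 1]
    return w_latency * latency + w_energy * energy - w_data * data_term


# Example: a device with 500 roughly uniform samples over 10 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
print(offloading_cost(latency=0.3, energy=0.1, labels=labels, num_classes=10))
```

In an MADDPG-style setup as described above, a per-device cost of this form could serve as (the negative of) each agent's reward signal, with the resource allocation module supplying the latency and energy terms; the details here are a sketch rather than the authors' implementation.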