{"title":"HSFL:基于层级组织的高效分离联邦学习框架","authors":"Tengxi Xia, Yongheng Deng, Sheng Yue, Junyi He, Ju Ren, Yaoxue Zhang","doi":"10.23919/CNSM55787.2022.9964646","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has emerged as a popular paradigm for distributed machine learning among vast clients. Unfortunately, resource-constrained clients often fail to participate in FL because they cannot pay for the memory resources required for model training due to their limited memory or bandwidth. Split federated learning (SFL) is a novel FL framework in which clients commit intermediate results of model training to a cloud server for client-server collaborative training of models, making resource-constrained clients also eligible for FL. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which results in significant communication overhead and elongated training time. In particular, this can be exacerbated by the imbalanced data distributions of clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains SFL model through hierarchical organization participants. Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a light-weight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaption approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, elaborating on its effectiveness and superiority in terms of efficiency and accuracy compared to baselines.","PeriodicalId":232521,"journal":{"name":"2022 18th International Conference on Network and Service Management (CNSM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"HSFL: An Efficient Split Federated Learning Framework via Hierarchical Organization\",\"authors\":\"Tengxi Xia, Yongheng Deng, Sheng Yue, Junyi He, Ju Ren, Yaoxue Zhang\",\"doi\":\"10.23919/CNSM55787.2022.9964646\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) has emerged as a popular paradigm for distributed machine learning among vast clients. Unfortunately, resource-constrained clients often fail to participate in FL because they cannot pay for the memory resources required for model training due to their limited memory or bandwidth. Split federated learning (SFL) is a novel FL framework in which clients commit intermediate results of model training to a cloud server for client-server collaborative training of models, making resource-constrained clients also eligible for FL. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which results in significant communication overhead and elongated training time. In particular, this can be exacerbated by the imbalanced data distributions of clients. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains SFL model through hierarchical organization participants. 
Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a light-weight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaption approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, elaborating on its effectiveness and superiority in terms of efficiency and accuracy compared to baselines.\",\"PeriodicalId\":232521,\"journal\":{\"name\":\"2022 18th International Conference on Network and Service Management (CNSM)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 18th International Conference on Network and Service Management (CNSM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/CNSM55787.2022.9964646\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 18th International Conference on Network and Service Management (CNSM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/CNSM55787.2022.9964646","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Federated learning (FL) has emerged as a popular paradigm for distributed machine learning across large numbers of clients. Unfortunately, resource-constrained clients often fail to participate in FL because their limited memory or bandwidth cannot meet the demands of model training. Split federated learning (SFL) is a novel FL framework in which clients offload intermediate results of model training to a cloud server for client-server collaborative training, making resource-constrained clients eligible for FL as well. However, existing SFL frameworks mostly require frequent communication with the cloud server to exchange intermediate results and model parameters, which incurs significant communication overhead and prolongs training time; imbalanced data distributions across clients can exacerbate this further. To tackle this issue, we propose HSFL, a hierarchical split federated learning framework that efficiently trains the SFL model through a hierarchical organization of participants. Under the HSFL framework, we formulate a Cloud Aggregation Time Minimization (CATM) problem to minimize the global training time and design a lightweight client assignment algorithm based on dynamic programming to solve it. Moreover, we develop a self-adaptation approach to cope with the dynamic computational resources of clients. Finally, we implement and evaluate HSFL on various real-world training tasks, demonstrating its effectiveness and superiority over baselines in terms of efficiency and accuracy.
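
For readers unfamiliar with the split-training exchange the abstract refers to, the following is a minimal, self-contained sketch (PyTorch-style Python) of one client-server step: the client runs the front portion of the model, "sends" the cut-layer activations to the server, and the server completes the forward and backward pass and returns the activation gradients so the client can update its own layers. The model split, optimizers, and in-process "communication" are illustrative assumptions for exposition, not HSFL's implementation.

```python
# Minimal split-learning step: client holds the first layers, server holds the rest.
# The tensors passed between the two halves stand in for network transfers.
import torch
import torch.nn as nn

client_model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_model = nn.Sequential(nn.Linear(128, 10))
client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    # Client-side forward pass up to the cut layer.
    activations = client_model(x)
    # "Send" the intermediate result to the server (a network transfer in a real system).
    smashed = activations.detach().requires_grad_(True)

    # Server-side forward and backward pass on its portion of the model.
    server_opt.zero_grad()
    loss = loss_fn(server_model(smashed), y)
    loss.backward()
    server_opt.step()

    # "Return" the gradient at the cut layer; the client backpropagates it
    # through its local layers and updates them.
    client_opt.zero_grad()
    activations.backward(smashed.grad)
    client_opt.step()
    return loss.item()

# Example: one step on a random MNIST-shaped batch.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(split_training_step(x, y))
```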
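The abstract also refers to aggregating client models at the cloud (and to minimizing that cloud aggregation time via CATM). The paper's hierarchical scheme and dynamic-programming client assignment are not detailed here; purely for context, the sketch below shows the generic data-size-weighted parameter averaging (FedAvg-style) that FL and SFL frameworks typically apply at an aggregation point. All names are illustrative and this is not HSFL's specific hierarchical mechanism.

```python
# Generic data-size-weighted parameter average, shown only to illustrate what
# "aggregation" means in this setting; not HSFL's hierarchical or CATM-driven logic.
from typing import Dict, List
import torch

def weighted_average(
    client_states: List[Dict[str, torch.Tensor]],
    client_sizes: List[int],
) -> Dict[str, torch.Tensor]:
    """Average per-parameter tensors, weighting each client by its local data size."""
    total = float(sum(client_sizes))
    return {
        name: sum(
            (size / total) * state[name].float()
            for state, size in zip(client_states, client_sizes)
        )
        for name in client_states[0]
    }

# Usage (hypothetical): aggregate the client-side sub-models from the sketch above.
# global_state = weighted_average([m.state_dict() for m in client_models], sizes)
```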