Authors: Afsaneh Afzali, Pirooz Shamsinejadbabaki
DOI: 10.1016/j.future.2024.107672
Journal: Future Generation Computer Systems - The International Journal of eScience, Volume 166, Article 107672
Published: 2024-12-09 (Journal Article; JCR Q1, Computer Science, Theory & Methods)
PHiFL-TL: Personalized hierarchical federated learning using transfer learning
Federated Learning is a collaborative machine learning (ML) framework designed to train a globally shared model without accessing participants’ private data. However, due to the statistical heterogeneity in the participants’ data, federated learning faces significant challenges. This approach generates a similar output for all participants, without adapting the model to each individual. Consequently, the global model performs poorly on each participant's task. To mitigate these issues, personalized federated learning methods aim to reduce the negative effects caused by data heterogeneity. Previous personalized approaches have relied on a single central server. However, in federated learning based on a client-server architecture, the central server's workload becomes a bottleneck. In our paper, we propose a Personalized Hierarchical Federated Learning approach (PHiFL-TL). First, PHiFL-TL trains a global shared model using hierarchical federated learning. Next, it constructs relatively personalized models through transfer learning. We demonstrate the effectiveness of PHiFL-TL on non-identical and independent data partitions from MNIST and FEMNIST datasets.
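The two-phase procedure described in the abstract (a hierarchical federated round structure to train the global shared model, followed by local fine-tuning as transfer learning) can be sketched roughly as below. This is a minimal illustrative simulation, not the paper's actual algorithm: the synthetic linear-regression clients, the two edge servers with three clients each, the plain two-level FedAvg, and the fine-tuning step counts are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5  # model dimension

def make_client():
    # Each client gets its own weight vector, mimicking statistical
    # heterogeneity (non-IID data) across participants.
    w_true = rng.normal(size=D)
    X = rng.normal(size=(40, D))
    y = X @ w_true + 0.01 * rng.normal(size=40)
    return X, y

# Hypothetical topology: 2 edge servers, each aggregating 3 clients.
clients = [[make_client() for _ in range(3)] for _ in range(2)]

def local_train(w, X, y, steps=20, lr=0.05):
    """A few gradient steps on the client's local MSE loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Phase 1: hierarchical federated learning of a global shared model.
w_global = np.zeros(D)
for _round in range(10):
    edge_models = []
    for edge in clients:  # each edge server's client cohort
        local = [local_train(w_global.copy(), X, y) for X, y in edge]
        edge_models.append(np.mean(local, axis=0))  # edge-level averaging
    w_global = np.mean(edge_models, axis=0)         # cloud-level averaging

# Phase 2: personalization via transfer learning, i.e. each client
# fine-tunes the global model on its own data.
personalized = []
for edge in clients:
    for X, y in edge:
        w_p = local_train(w_global.copy(), X, y, steps=50)
        personalized.append((w_p, X, y))

# On each client's own task, the fine-tuned model should do no worse
# than the one-size-fits-all global model.
for w_p, X, y in personalized:
    assert mse(w_p, X, y) <= mse(w_global, X, y) + 1e-9
```

The sketch only conveys the structure: local training never leaves the client, the edge layer absorbs most aggregation traffic (relieving the central server bottleneck the abstract mentions), and personalization is deferred to a cheap local fine-tuning pass from the shared initialization.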
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.