Title: Personalized lightweight federated learning for efficient and private model training in heterogeneous data environments
Author: Ying Wang
Journal: Systems and Soft Computing, Volume 7, Article 200212
DOI: 10.1016/j.sasc.2025.200212
Published: 2025-03-12 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2772941925000304
Citations: 0
Abstract
Personalized federated learning (PFL) enables collaborative model training across devices while adapting to heterogeneous data, but it faces resource constraints on edge devices. Combining PFL with pruning techniques helps address these constraints. One challenge is that one-size-fits-all pruning strategies ignore the varying importance of parameters for each client's local data. To overcome this, we propose PLFL, a novel personalized lightweight federated learning framework. PLFL uses a hypernetwork at the server to deliver personalized local models to clients and incorporates a federated pruning mechanism tailored to parameter importance, preserving performance while maintaining personalization. Experimental results show that PLFL achieves higher accuracy with lower computational cost and fewer parameters than state-of-the-art methods on heterogeneous datasets.
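The abstract's core idea of pruning tailored to parameter importance can be illustrated with a minimal sketch. The paper does not specify its importance score, so this example uses absolute weight magnitude as a stand-in; the function name and the per-client `keep_ratio` parameter are illustrative assumptions, not the paper's API.

```python
import numpy as np

def importance_prune(weights, keep_ratio):
    """Zero out the least-important weights of one client's layer.

    NOTE: importance is approximated here by absolute magnitude; the
    paper's mechanism scores parameters by their importance for the
    client's local data, which this sketch does not reproduce.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    # Threshold at the k-th largest magnitude, keep everything above it.
    thresh = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= thresh
    return weights * mask, mask

# Toy example: keep the top 50% of weights by magnitude.
w = np.array([[0.10, -0.90],
              [0.05,  0.70]])
pruned, mask = importance_prune(w, keep_ratio=0.5)
```

Because each client computes its own mask from its own importance scores, different clients retain different sub-networks, which is what lets a one-size-fits-all pruning schedule be avoided.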