{"title":"Federated Fine-Tuning on Heterogeneous LoRAs With Error-Compensated Aggregation.","authors":"Wanyi Ning,Jingyu Wang,Qi Qi,Haifeng Sun,Daixuan Cheng,Cong Liu,Lei Zhang,Zirui Zhuang,Jianxin Liao","doi":"10.1109/tnnls.2025.3586545","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has recently been applied to the parameter-efficient fine-tuning (PEFT) of large language models (LLMs). While promising, client resource heterogeneity has imposed the challenge of the \"bucket effect\" to FL, where model configuration must cater to the client with the fewest resources. To tackle this issue, heterogeneous low-rank adaptation (LoRA) has recently emerged in FL, which enables clients to do local fine-tuning with different LoRA ranks. However, existing works in this area typically adopt zero-padding, stacking, or singular value decomposition (SVD) for LoRA aggregation, which often incur precision loss or significant overhead, limiting their practicality. In this article, we propose ECLoRA, a novel method for federated fine-tuning with heterogeneous LoRA settings across clients. ECLoRA employs randomized SVD (RSVD) to dramatically reduce aggregation overhead while introducing an error compensation (EC) mechanism that incorporates the decomposition error from previous rounds to improve aggregation precision. Extensive experiments on four widely used foundation models across six public tasks demonstrate the effectiveness of ECLoRA. Specifically, ECLoRA is: (1) accurate, significantly improving the final model performance; (2) fast, accelerating convergence with an average speedup of $1.54\\times $ to $3.01\\times $ ; and (3) practical, reducing aggregation time by approximately $40\\times $ compared to classical SVD.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"109 1","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3586545","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Federated learning (FL) has recently been applied to the parameter-efficient fine-tuning (PEFT) of large language models (LLMs). While promising, client resource heterogeneity imposes the "bucket effect" on FL: the model configuration must cater to the client with the fewest resources. To tackle this issue, heterogeneous low-rank adaptation (LoRA) has recently emerged in FL, allowing clients to fine-tune locally with different LoRA ranks. However, existing works in this area typically adopt zero-padding, stacking, or singular value decomposition (SVD) for LoRA aggregation, which often incurs precision loss or significant overhead, limiting practicality. In this article, we propose ECLoRA, a novel method for federated fine-tuning with heterogeneous LoRA settings across clients. ECLoRA employs randomized SVD (RSVD) to dramatically reduce aggregation overhead, while an error compensation (EC) mechanism incorporates the decomposition error from previous rounds to improve aggregation precision. Extensive experiments on four widely used foundation models across six public tasks demonstrate the effectiveness of ECLoRA. Specifically, ECLoRA is: (1) accurate, significantly improving final model performance; (2) fast, accelerating convergence with an average speedup of $1.54\times$ to $3.01\times$; and (3) practical, reducing aggregation time by approximately $40\times$ compared to classical SVD.
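To make the aggregation idea concrete, the following is a minimal sketch, assuming NumPy and illustrative names (randomized_svd, aggregate_lora, target_rank, residual) that are not taken from the paper: it merges heterogeneous-rank LoRA factors into a single global pair via randomized SVD and carries the truncation error forward as an error-compensation term. It is a hedged illustration of the general RSVD-plus-error-feedback pattern, not the authors' released implementation.

```python
# Sketch: RSVD-based aggregation of heterogeneous-rank LoRA updates with error feedback.
# All function and variable names here are illustrative assumptions.
import numpy as np

def randomized_svd(M, rank, oversample=10, n_iter=2, rng=None):
    """Approximate truncated SVD of M via random projection (Halko-style RSVD)."""
    rng = np.random.default_rng() if rng is None else rng
    k = min(rank + oversample, min(M.shape))
    # Sketch the column space of M with a Gaussian test matrix.
    Q = M @ rng.standard_normal((M.shape[1], k))
    # A few power iterations sharpen the approximation for flat spectra.
    for _ in range(n_iter):
        Q = M @ (M.T @ Q)
    Q, _ = np.linalg.qr(Q)
    # Exact SVD on the small projected matrix, then lift back.
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank, :]

def aggregate_lora(client_factors, weights, target_rank, residual=None, rng=None):
    """Merge per-client (B_k, A_k) of possibly different ranks into one rank-r pair.

    client_factors: list of (B_k, A_k), with B_k of shape (d_out, r_k) and A_k (r_k, d_in).
    residual: decomposition error carried over from the previous round (error compensation).
    Returns the global (B, A) and the new residual to feed back next round.
    """
    d_out = client_factors[0][0].shape[0]
    d_in = client_factors[0][1].shape[1]
    # Weighted sum of the full low-rank updates; client ranks r_k may differ.
    delta = np.zeros((d_out, d_in))
    for (B, A), w in zip(client_factors, weights):
        delta += w * (B @ A)
    # Error compensation: add back what was lost in the previous truncation.
    if residual is not None:
        delta += residual
    # Compress the aggregate to the target rank with randomized SVD.
    U, s, Vt = randomized_svd(delta, target_rank, rng=rng)
    B_glob = U * s          # absorb singular values into the B factor
    A_glob = Vt
    new_residual = delta - B_glob @ A_glob
    return B_glob, A_glob, new_residual
```

In this sketch the residual accumulates whatever the rank truncation discards each round and is re-injected before the next decomposition, which is the role the abstract attributes to the EC mechanism; the randomized projection is what replaces the full SVD and drives the reported reduction in aggregation time.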
About the journal:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.