La-LoRA: Parameter-efficient fine-tuning with layer-wise adaptive low-rank adaptation
Jiancheng Gu, Jiabin Yuan, Jiyuan Cai, Xianfa Zhou, Lili Fan
Neural Networks, Volume 194, Article 108095. Published 2025-09-10. DOI: 10.1016/j.neunet.2025.108095
Citations: 0
Abstract
Parameter-efficient fine-tuning (PEFT) has emerged as a critical paradigm for adapting large pre-trained models to downstream tasks, offering a balance between computational efficiency and model performance. Among these methods, Low-Rank Adaptation (LoRA) has gained significant popularity due to its efficiency; it freezes the pre-trained weights and decomposes the incremental weight matrices into two trainable low-rank matrices. However, a critical limitation of LoRA lies in its uniform rank assignment across all layers, which fails to account for the heterogeneous importance of different layers in contributing to task performance, potentially resulting in suboptimal adaptation. To address this limitation, we propose Layer-wise Adaptive Low-Rank Adaptation (La-LoRA), a novel approach that dynamically allocates rank to each layer based on a Dynamic Contribution-Driven Parameter Budget (DCDPB) and Truncated Norm Weighted Dynamic Rank Allocation (TNW-DRA) during training. By treating each layer as an independent unit and progressively adjusting its rank allocation, La-LoRA ensures strong model performance while maintaining computational efficiency and adapting to the complexity of diverse tasks. We conducted extensive experiments across multiple tasks and models to evaluate the effectiveness of La-LoRA. The results demonstrate that La-LoRA consistently outperforms existing baseline methods, validating its effectiveness in diverse scenarios.
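For readers unfamiliar with the underlying mechanism, the sketch below illustrates a standard LoRA linear layer in which each layer can be given its own rank. It is a minimal illustration only: the `LoRALinear` class, the `alpha` scaling, and the `rank_per_layer` values are assumptions made for this example, and it does not implement the paper's DCDPB budget or TNW-DRA allocation rules, which choose these ranks dynamically during training.

```python
# Minimal sketch (not the authors' La-LoRA implementation): a LoRA linear layer
# whose rank can differ per layer. Ranks here are illustrative placeholders.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Pre-trained weights stay frozen; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank update B @ A, scaled.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Layer-wise ranks (hypothetical values; La-LoRA would set these adaptively).
rank_per_layer = {"layer0": 4, "layer1": 8, "layer2": 2}
layers = nn.ModuleDict({
    name: LoRALinear(nn.Linear(64, 64), rank=r) for name, r in rank_per_layer.items()
})

x = torch.randn(2, 64)
for name, layer in layers.items():
    x = layer(x)
print(x.shape)  # torch.Size([2, 64])
```

The point the example makes concrete is that each adapter's trainable parameter count scales with its rank (rank × (in_features + out_features)), so allocating larger ranks to more important layers spends the overall parameter budget where it contributes most.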
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.