ERAT-DLoRA: Parameter-efficient tuning with enhanced range adaptation in time and depth aware dynamic LoRA

IF 5.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
{"title":"ERAT-DLoRA:在时间和深度感知动态 LoRA 中通过增强范围自适应进行参数高效调整","authors":"","doi":"10.1016/j.neucom.2024.128778","DOIUrl":null,"url":null,"abstract":"<div><div>Despite their potential, the industrial deployment of large language models (LLMs) is constrained by traditional fine-tuning procedures that are both resource-intensive and time-consuming. Low-Rank Adaptation (LoRA) has emerged as a pioneering methodology for addressing these challenges. By integrating low-rank decomposition matrices into network weights to reduce trainable parameters, LoRA effectively accelerates the adaptation process. While research on LoRA primarily focuses on adjusting low-rank matrices, DyLoRA optimizes the rank-setting mechanism to avoid extensive effort in rank size training and searching. However, DyLoRA rank configuration mechanism has its own limitation. First, DyLoRA sets the same rank size for all the low-rank adaptation layers at each time step. Given that layers with different depth contain distinct information, they should have varying rank values to accurately capture their unique characteristics. Second, the truncated phase selected for ordering representation based on nested dropout regulation is only half dynamic, continuously dropping tail units, thereby limiting its ability to access information. In this work, we propose a novel technique, enhanced range adaptation in time and depth aware dynamic LoRA (ERAT-DLoRA) to address these problems. The ERAT-DLoRA method introduces a dynamic range to the truncated phase that makes the truncated phase fully dynamic. Additionally, we design a time and layer-aware dynamic rank to ensure appropriate adjustments at different time steps and layer levels. We evaluate our solution on natural languages understanding and language generation tasks. Extensive evaluation results demonstrate the effectiveness of the proposed method.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ERAT-DLoRA: Parameter-efficient tuning with enhanced range adaptation in time and depth aware dynamic LoRA\",\"authors\":\"\",\"doi\":\"10.1016/j.neucom.2024.128778\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Despite their potential, the industrial deployment of large language models (LLMs) is constrained by traditional fine-tuning procedures that are both resource-intensive and time-consuming. Low-Rank Adaptation (LoRA) has emerged as a pioneering methodology for addressing these challenges. By integrating low-rank decomposition matrices into network weights to reduce trainable parameters, LoRA effectively accelerates the adaptation process. While research on LoRA primarily focuses on adjusting low-rank matrices, DyLoRA optimizes the rank-setting mechanism to avoid extensive effort in rank size training and searching. However, DyLoRA rank configuration mechanism has its own limitation. First, DyLoRA sets the same rank size for all the low-rank adaptation layers at each time step. Given that layers with different depth contain distinct information, they should have varying rank values to accurately capture their unique characteristics. Second, the truncated phase selected for ordering representation based on nested dropout regulation is only half dynamic, continuously dropping tail units, thereby limiting its ability to access information. 
In this work, we propose a novel technique, enhanced range adaptation in time and depth aware dynamic LoRA (ERAT-DLoRA) to address these problems. The ERAT-DLoRA method introduces a dynamic range to the truncated phase that makes the truncated phase fully dynamic. Additionally, we design a time and layer-aware dynamic rank to ensure appropriate adjustments at different time steps and layer levels. We evaluate our solution on natural languages understanding and language generation tasks. Extensive evaluation results demonstrate the effectiveness of the proposed method.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224015492\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224015492","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Despite their potential, the industrial deployment of large language models (LLMs) is constrained by traditional fine-tuning procedures that are both resource-intensive and time-consuming. Low-Rank Adaptation (LoRA) has emerged as a pioneering methodology for addressing these challenges. By integrating low-rank decomposition matrices into network weights to reduce the number of trainable parameters, LoRA effectively accelerates the adaptation process. While research on LoRA primarily focuses on adjusting the low-rank matrices, DyLoRA optimizes the rank-setting mechanism to avoid extensive effort in training and searching over rank sizes. However, DyLoRA's rank configuration mechanism has its own limitations. First, DyLoRA sets the same rank size for all low-rank adaptation layers at each time step. Given that layers at different depths contain distinct information, they should have different rank values to accurately capture their unique characteristics. Second, the truncation range selected for ordered representation under nested-dropout regularization is only half dynamic: it always drops the tail units, which limits the information it can access. In this work, we propose a novel technique, enhanced range adaptation in time and depth aware dynamic LoRA (ERAT-DLoRA), to address these problems. ERAT-DLoRA introduces a dynamic range to the truncation phase, making it fully dynamic. Additionally, we design a time- and layer-aware dynamic rank to ensure appropriate adjustments across time steps and layer depths. We evaluate our solution on natural language understanding and generation tasks, and extensive evaluation results demonstrate the effectiveness of the proposed method.
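To make the mechanisms the abstract describes concrete, below is a minimal PyTorch-style sketch. It is an illustration under stated assumptions, not the authors' implementation: the names `DynamicLoRALinear` and `rank_window` are hypothetical, and the schedule is an invented stand-in for the paper's actual time- and depth-aware rank configuration.

```python
import torch
import torch.nn as nn


class DynamicLoRALinear(nn.Module):
    """A frozen linear layer plus a LoRA adapter whose active rank
    window [lo, hi) can change per forward pass. A sketch of the
    dynamic-LoRA idea; not the paper's actual code."""

    def __init__(self, in_features, out_features, max_rank=8, alpha=16.0):
        super().__init__()
        # A frozen random matrix stands in for the pretrained weight.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02,
            requires_grad=False)
        # Standard LoRA factors; B starts at zero so the adapter
        # initially contributes nothing.
        self.lora_A = nn.Parameter(torch.randn(max_rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.max_rank = max_rank
        # Scaling fixed to alpha / max_rank here for simplicity.
        self.scaling = alpha / max_rank

    def forward(self, x, lo=0, hi=None):
        hi = self.max_rank if hi is None else hi
        A = self.lora_A[lo:hi]        # (r, in_features)
        B = self.lora_B[:, lo:hi]     # (out_features, r)
        return x @ self.weight.T + (x @ A.T) @ B.T * self.scaling


def rank_window(layer_idx, num_layers, step, max_rank):
    """Hypothetical time- and depth-aware schedule: deeper layers get a
    larger rank budget, and the kept window slides with the step."""
    budget = max(1, round(max_rank * (layer_idx + 1) / num_layers))
    lo = step % (max_rank - budget + 1)   # sliding start of the window
    return lo, lo + budget


# Usage: pick the rank window for layer 5 of 12 at training step 3.
layer = DynamicLoRALinear(768, 768, max_rank=8)
lo, hi = rank_window(layer_idx=5, num_layers=12, step=3, max_rank=8)
out = layer(torch.randn(4, 768), lo, hi)   # shape (4, 768)
```

The key contrast is in `forward()`: a DyLoRA-style nested-dropout truncation would fix `lo = 0` and vary only `hi`, so only tail units are ever dropped; letting the whole window `[lo, hi)` move is what the abstract calls a fully dynamic truncation range.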
Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10 · Self-citation rate: 10.00% · Annual article count: 1382 · Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.