WaveletFT: Discrete wavelet transform for parameter-efficient fine-tuning
Can Hu, Jie Yang, Shien Song, Wentao Fan, Tao Xie
Neurocomputing, Volume 649, Article 130765. Published 2025-06-25. DOI: 10.1016/j.neucom.2025.130765
Recently, low-rank adaptation (LoRA) has achieved significant popularity for fine-tuning foundation models owing to its ability to substantially reduce the number of trainable parameters while avoiding additional inference costs. This reduction is achieved by introducing low-rank matrices A and B to represent the weight update, defined as ΔW = AB. Nonetheless, an accuracy gap frequently remains between LoRA and full fine-tuning (FT). Additionally, LoRA encounters storage difficulties, particularly when extensive customization adaptations or larger base models are involved. In this work, we aim to approximate the learning capacity of FT while further reducing trainable parameters by leveraging the expressive power of the wavelet transform (WT). We present a novel approach, named WaveletFT, which treats ΔW as a matrix in the spatial domain and learns only a small subset of its spectral coefficients. Using the trained spectral coefficients, we apply the inverse discrete WT to reconstruct ΔW. Experimental results demonstrate that the proposed WaveletFT method offers comparable or superior performance with fewer parameters than LoRA across diverse tasks, including natural language understanding, natural language generation, instruction tuning, image classification, and text-to-image generation.
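To make the mechanism concrete, below is a minimal NumPy/PyWavelets sketch of the idea described in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the single-level Haar wavelet, the random choice of coefficient locations, and all names here (k, theta, reconstruct_delta_w) are hypothetical.

# Minimal sketch of the WaveletFT idea (illustrative assumptions only:
# single-level Haar wavelet, random coefficient locations, invented names).
import numpy as np
import pywt

rng = np.random.default_rng(0)
d_out, d_in = 768, 768            # shape of the frozen weight matrix W
k = 128                           # number of trainable spectral coefficients

# LoRA baseline for comparison: DeltaW = A @ B with rank r,
# which trains d_out*r + r*d_in parameters.
r = 8
lora_params = d_out * r + r * d_in

# WaveletFT-style update: the wavelet coefficients of DeltaW are all
# zero except for k trainable entries; DeltaW is rebuilt by the
# inverse discrete wavelet transform.
cA, (cH, cV, cD) = pywt.dwt2(np.zeros((d_out, d_in)), "haar")
bands = [cA, cH, cV, cD]

# Pick k random (band, row, col) locations; their values form theta,
# the only trainable parameters (updated by the optimizer during tuning).
locations = [(rng.integers(4), rng.integers(cA.shape[0]),
              rng.integers(cA.shape[1])) for _ in range(k)]
theta = np.zeros(k)

def reconstruct_delta_w(theta):
    """Scatter the k trained coefficients into the subbands and invert."""
    for band in bands:
        band.fill(0.0)
    for value, (b, i, j) in zip(theta, locations):
        bands[b][i, j] = value
    return pywt.idwt2((bands[0], (bands[1], bands[2], bands[3])), "haar")

delta_w = reconstruct_delta_w(theta)      # shape (d_out, d_in)
print("LoRA trainable params:     ", lora_params)   # 12288
print("WaveletFT trainable params:", theta.size)    # 128

In this sketch only theta would be optimized during fine-tuning; the adapted layer computes x @ (W + ΔW) with W frozen, so, as with LoRA, the update can be merged into W after training at no extra inference cost.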
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.