{"title":"Unstructured Pruning and Low Rank Factorisation of Self-Supervised Pre-Trained Speech Models","authors":"Haoyu Wang;Wei-Qiang Zhang","doi":"10.1109/JSTSP.2024.3433616","DOIUrl":null,"url":null,"abstract":"Self-supervised pre-trained speech models require significant memory and computational resources, limiting their applicability to many speech tasks. Unstructured pruning is a compression method that can achieve minimal performance degradation, while the resulting sparse matrix mandates special hardware or computational operators for acceleration. In this study, we propose a novel approach that leverages the potential low-rank structures of the unstructured sparse matrices by applying truncated singular value decomposition (SVD), thus converting them into parameter-efficient dense models. Moreover, we introduce nuclear norm regularisation to ensure lower rank and a learnable singular value selection strategy to determine the approximate truncation rate for each matrix. Experiments on multiple speech tasks demonstrate that the proposed method can convert an unstructured sparse model into a light-weight and hardware-friendly dense model with comparable or superior performance.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"18 6","pages":"1046-1058"},"PeriodicalIF":8.7000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10609479/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Self-supervised pre-trained speech models require significant memory and computational resources, limiting their applicability to many speech tasks. Unstructured pruning is a compression method that can achieve minimal performance degradation, but the resulting sparse matrices require special hardware or computational operators for acceleration. In this study, we propose a novel approach that leverages the potential low-rank structure of the unstructured sparse matrices by applying truncated singular value decomposition (SVD), thus converting them into parameter-efficient dense models. Moreover, we introduce nuclear norm regularisation to encourage lower rank and a learnable singular value selection strategy to determine the appropriate truncation rate for each matrix. Experiments on multiple speech tasks demonstrate that the proposed method can convert an unstructured sparse model into a lightweight and hardware-friendly dense model with comparable or superior performance.
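To make the core idea concrete, below is a minimal, hypothetical sketch in PyTorch of the conversion step the abstract describes: a pruned (unstructured-sparse) linear layer's weight is factorised with truncated SVD into two small dense factors, so the layer can be replaced by two ordinary dense linear layers. The function name `factorise_pruned_linear` and the energy-based rank threshold are illustrative assumptions; the paper's actual learnable singular value selection strategy is not reproduced here.

```python
# Hypothetical sketch: replace a pruned linear layer's weight W with a
# truncated-SVD dense factorisation W ~= U_k * diag(S_k) * Vh_k, stored as
# two smaller dense linear layers. The energy-based rank choice below is an
# illustrative stand-in for the paper's learnable singular value selection.
import torch
import torch.nn as nn


def factorise_pruned_linear(layer: nn.Linear, energy: float = 0.99) -> nn.Sequential:
    W = layer.weight.data                      # (out_features, in_features), already pruned
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)

    # Keep the smallest rank k whose leading singular values retain `energy`
    # of the total spectral mass.
    cum = torch.cumsum(S, dim=0) / S.sum()
    k = int(torch.searchsorted(cum, energy).item()) + 1

    # Factorising only pays off if the two factors are smaller than W itself.
    out_f, in_f = W.shape
    if k * (in_f + out_f) >= in_f * out_f:
        return nn.Sequential(layer)

    # First factor: x -> (sqrt(S_k) Vh_k) x, no bias.
    first = nn.Linear(in_f, k, bias=False)
    first.weight.data = S[:k].sqrt().unsqueeze(1) * Vh[:k]

    # Second factor: h -> (U_k sqrt(S_k)) h + b, carrying the original bias.
    second = nn.Linear(k, out_f, bias=layer.bias is not None)
    second.weight.data = U[:, :k] * S[:k].sqrt()
    if layer.bias is not None:
        second.bias.data = layer.bias.data

    return nn.Sequential(first, second)
```

The nuclear norm regularisation mentioned in the abstract could, again as an assumption about implementation rather than the authors' exact recipe, be approximated during training by adding a penalty such as `lam * torch.linalg.matrix_norm(W, ord='nuc')` to the task loss, which pushes the pruned weight matrices toward lower rank before the SVD truncation is applied.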
About the Journal
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.