Leveraging discriminative data: A pathway to high-performance, stable One-shot Network Pruning at Initialization

Journal: Neurocomputing (IF 5.5, JCR Q1, Computer Science, Artificial Intelligence)
DOI: 10.1016/j.neucom.2024.128529
Publication date: 2024-09-03
Publication type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S0925231224013006
Code: https://github.com/Nonac/DDOPaI
Citations: 0
Abstract
One-shot Network Pruning at Initialization (OPaI) is acknowledged as a highly cost-effective strategy for network pruning. However, it has been observed that OPaI models tend to suffer from reduced accuracy stability as target sparsity increases. This study introduces a novel approach that incorporates Discriminative Data (DD) into OPaI, significantly improving performance at higher sparsity levels while preserving the “one-shot” nature. Our approach achieves state-of-the-art (SOTA) performance, challenging the previously held belief that OPaI is data-independent. Through detailed ablation studies, we thoroughly investigate the influence of data on OPaI, focusing in particular on how DD addresses a common failure mode of OPaI known as “layer collapse”. Furthermore, our experiments demonstrate that leveraging DD from various pre-trained models can markedly boost pruning performance across different models without requiring changes to the existing model architectures or pruning methodologies. These significant improvements highlight our method’s high generalizability and stability, paving new paths for advancing pruning strategies. Our code is publicly available at: https://github.com/Nonac/DDOPaI.
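For context, the generic OPaI recipe the abstract builds on can be illustrated with a SNIP-style sketch: score each randomly initialized weight by its connection sensitivity |w · ∂L/∂w| computed on a single data batch, then prune to the target sparsity in one shot. This is a minimal, hypothetical illustration of standard pruning at initialization, not the paper's DD-based method; the abstract's point is that the choice of this scoring batch (e.g. Discriminative Data) matters. All names and the toy linear model below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized single linear layer: 4 inputs -> 2 outputs.
W = rng.standard_normal((2, 4))

# One mini-batch used for scoring (placeholder random data here; the paper
# argues that using Discriminative Data for this batch is what matters).
X = rng.standard_normal((8, 4))
y = rng.standard_normal((8, 2))

# Forward pass and gradient of a mean-squared-error loss w.r.t. W.
pred = X @ W.T
grad = (2.0 / len(X)) * (pred - y).T @ X

# SNIP-style connection sensitivity: |w * dL/dw|.
saliency = np.abs(W * grad)

# One-shot prune: keep the top (1 - sparsity) fraction of weights.
sparsity = 0.5
k = int(round(W.size * (1 - sparsity)))
threshold = np.sort(saliency.ravel())[::-1][k - 1]
mask = (saliency >= threshold).astype(W.dtype)

W_pruned = W * mask
print(int(mask.sum()), "of", W.size, "weights kept")
```

“Layer collapse”, the failure mode the abstract mentions, occurs when this global thresholding removes every weight in some layer at high sparsity, disconnecting the network; the paper studies how the scoring data influences that outcome.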
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.