{"title":"PruneAug:利用自动分层块剪枝技术在多种稀疏平台上弥合 DNN 修剪和推理延迟问题","authors":"Hanfei Geng;Yifei Liu;Yujie Zheng;Li Lyna Zhang;Jingwei Sun;Yujing Wang;Yang Wang;Guangzhong Sun;Mao Yang;Ting Cao;Yunxin Liu","doi":"10.1109/TC.2024.3441855","DOIUrl":null,"url":null,"abstract":"Although pruning is an effective technique to reduce the number of weights in deep neural networks (DNNs), it remains challenging for the resulting sparse networks to perform low-latency inference on everyday hardware. This problem is mainly caused by the incompatibility between the unstructured sparsity adopted for accuracy preservation and the sparse platform's (the combination of sparse kernel library and the underlying hardware) expectation of regular sparse patterns. In order to resolve this conflict, we propose PruneAug, an augmentation over existing unstructured pruning methods that finds block-sparse networks with much lower latency but preserves the accuracy. The fundamental idea of PruneAug is to prune the network with a layerwise block dimension assignment in a platform-aware fashion. Subject to an accuracy-loss constraint, PruneAug minimizes the latency of the block sparse network by jointly optimizing this layerwise block dimension assignment and the network's sparsity level. Admittedly, this approach expands the solution space. To curb our search cost, we include multiple optimizations while designing PruneAug's search space and strategy. Our evaluation over diverse pruning methods, DNNs, datasets, and sparse platforms shows that PruneAug enables different pruning methods to achieve speedup (as much as \n<inline-formula><tex-math>$\\boldsymbol{\\sim}13\\boldsymbol{\\times}$</tex-math></inline-formula>\n depending on the platform) while maintaining competitive accuracy relative to unstructured sparsity, extracting the full potential of sparse platforms.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"73 11","pages":"2576-2589"},"PeriodicalIF":3.6000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PruneAug: Bridging DNN Pruning and Inference Latency on Diverse Sparse Platforms Using Automatic Layerwise Block Pruning\",\"authors\":\"Hanfei Geng;Yifei Liu;Yujie Zheng;Li Lyna Zhang;Jingwei Sun;Yujing Wang;Yang Wang;Guangzhong Sun;Mao Yang;Ting Cao;Yunxin Liu\",\"doi\":\"10.1109/TC.2024.3441855\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although pruning is an effective technique to reduce the number of weights in deep neural networks (DNNs), it remains challenging for the resulting sparse networks to perform low-latency inference on everyday hardware. This problem is mainly caused by the incompatibility between the unstructured sparsity adopted for accuracy preservation and the sparse platform's (the combination of sparse kernel library and the underlying hardware) expectation of regular sparse patterns. In order to resolve this conflict, we propose PruneAug, an augmentation over existing unstructured pruning methods that finds block-sparse networks with much lower latency but preserves the accuracy. The fundamental idea of PruneAug is to prune the network with a layerwise block dimension assignment in a platform-aware fashion. Subject to an accuracy-loss constraint, PruneAug minimizes the latency of the block sparse network by jointly optimizing this layerwise block dimension assignment and the network's sparsity level. 
Admittedly, this approach expands the solution space. To curb our search cost, we include multiple optimizations while designing PruneAug's search space and strategy. Our evaluation over diverse pruning methods, DNNs, datasets, and sparse platforms shows that PruneAug enables different pruning methods to achieve speedup (as much as \\n<inline-formula><tex-math>$\\\\boldsymbol{\\\\sim}13\\\\boldsymbol{\\\\times}$</tex-math></inline-formula>\\n depending on the platform) while maintaining competitive accuracy relative to unstructured sparsity, extracting the full potential of sparse platforms.\",\"PeriodicalId\":13087,\"journal\":{\"name\":\"IEEE Transactions on Computers\",\"volume\":\"73 11\",\"pages\":\"2576-2589\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computers\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10633894/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10633894/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
PruneAug: Bridging DNN Pruning and Inference Latency on Diverse Sparse Platforms Using Automatic Layerwise Block Pruning
Although pruning is an effective technique for reducing the number of weights in deep neural networks (DNNs), it remains challenging for the resulting sparse networks to perform low-latency inference on everyday hardware. This problem is mainly caused by the incompatibility between the unstructured sparsity adopted for accuracy preservation and the regular sparse patterns expected by the sparse platform (the combination of a sparse kernel library and the underlying hardware). To resolve this conflict, we propose PruneAug, an augmentation over existing unstructured pruning methods that finds block-sparse networks with much lower latency while preserving accuracy. The fundamental idea of PruneAug is to prune the network with a layerwise block dimension assignment in a platform-aware fashion. Subject to an accuracy-loss constraint, PruneAug minimizes the latency of the block-sparse network by jointly optimizing this layerwise block dimension assignment and the network's sparsity level. Admittedly, this approach expands the solution space. To curb the search cost, we incorporate multiple optimizations into the design of PruneAug's search space and strategy. Our evaluation across diverse pruning methods, DNNs, datasets, and sparse platforms shows that PruneAug enables different pruning methods to achieve speedups (as much as ∼13× depending on the platform) while maintaining competitive accuracy relative to unstructured sparsity, extracting the full potential of sparse platforms.
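The abstract does not reproduce PruneAug's actual algorithm, but the pruning primitive it builds on, removing the lowest-magnitude blocks of a weight matrix at a per-layer block dimension, can be sketched briefly. The snippet below is a minimal, hypothetical illustration: the function name `block_prune`, the L1 block-scoring rule, and the example `layer_blocks` assignment are our own assumptions for exposition, not PruneAug's API.

```python
import numpy as np

def block_prune(weight, block_shape, sparsity):
    """Zero out the lowest-magnitude blocks of a 2-D weight matrix.

    weight      : (rows, cols) array; dims assumed divisible by the block dims
    block_shape : (bh, bw) block dimension assigned to this layer
    sparsity    : fraction of blocks to remove, in [0, 1)
    """
    bh, bw = block_shape
    rows, cols = weight.shape
    # View the matrix as a grid of (bh, bw) blocks; score each block by L1 norm.
    blocks = weight.reshape(rows // bh, bh, cols // bw, bw)
    scores = np.abs(blocks).sum(axis=(1, 3))   # one importance score per block
    k = int(sparsity * scores.size)            # number of blocks to drop
    mask = np.ones_like(scores, dtype=bool)
    if k > 0:
        # Blocks whose score falls among the k smallest get pruned away.
        cutoff = np.partition(scores.ravel(), k - 1)[k - 1]
        mask = scores > cutoff
    # Expand the block-level mask to element granularity and apply it.
    elem_mask = np.repeat(np.repeat(mask, bh, axis=0), bw, axis=1)
    return weight * elem_mask

# Hypothetical layerwise assignment: coarser blocks where the platform's sparse
# kernels reward regularity, finer blocks where accuracy is more fragile.
layer_blocks = {"conv1": (8, 8), "conv2": (4, 4), "fc": (1, 16)}
weights = {name: np.random.randn(64, 64) for name in layer_blocks}
pruned = {name: block_prune(w, layer_blocks[name], sparsity=0.7)
          for name, w in weights.items()}
```

In the full system, the per-layer (bh, bw) choice and the network's sparsity level would be searched jointly, with each candidate's latency measured on the target sparse platform and the accuracy-loss constraint filtering candidates; the sketch above shows only the pruning primitive such a search would invoke.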
Journal Introduction:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.