Evaluating the Merits of Ranking in Structured Network Pruning
Kuldeep Sharma, N. Ramakrishnan, Alok Prakash, S. Lam, T. Srikanthan
2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), November 2020
DOI: 10.1109/ICDCS47774.2020.00183
Citations: 0
Abstract
Pruning of channels in trained deep neural networks has been widely used to implement efficient DNNs that can be deployed on embedded/mobile devices. The majority of existing techniques employ criteria-based sorting of the channels to preserve salient channels during pruning as well as to automatically determine the pruned network architecture. However, recent studies on widely used DNNs, such as VGG-16, have shown that selecting and preserving salient channels using pruning criteria is not necessary, since the plasticity of the network allows the accuracy to be recovered through fine-tuning. In this work, we further examine the value of ranking criteria in pruning and show that if channels are removed gradually and iteratively, alternating with fine-tuning on the target dataset, ranking criteria are indeed not needed to select redundant channels. Experimental results confirm that even a random selection of channels for pruning leads to similar performance (accuracy). In addition, we demonstrate that even a simple pruning technique that uniformly removes channels from all layers in the network performs similarly to existing ranking criteria-based approaches, while leading to lower inference time (GFLOPs). Our extensive evaluations cover the context of embedded DNN implementations, specifically small networks such as SqueezeNet and aggressive pruning percentages. We leverage these insights to propose a GFLOPs-aware iterative pruning strategy that does not rely on any ranking criteria and yet can further reduce inference time by 15% without sacrificing accuracy.
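To make the procedure described in the abstract concrete, the sketch below illustrates criterion-free iterative channel pruning: channels are selected at random (no saliency ranking), removed from a toy two-layer CNN, and the network is fine-tuned after each pruning step. This is not the authors' implementation; the model, the random stand-in data, and the `prune_ratio` / `num_iterations` values are illustrative assumptions, and a real experiment would fine-tune on the target dataset (e.g., the VGG-16 or SqueezeNet setups mentioned above).

```python
# Minimal sketch of ranking-free iterative channel pruning with fine-tuning.
# Assumptions: toy two-layer CNN, random data, illustrative hyperparameters.
import torch
import torch.nn as nn


def prune_conv_pair(conv1, conv2, keep_idx):
    """Rebuild conv1 with only the kept output channels and conv2 with the
    matching input channels, copying the surviving weights."""
    new_conv1 = nn.Conv2d(conv1.in_channels, len(keep_idx),
                          conv1.kernel_size, conv1.stride, conv1.padding)
    new_conv1.weight.data = conv1.weight.data[keep_idx].clone()
    new_conv1.bias.data = conv1.bias.data[keep_idx].clone()

    new_conv2 = nn.Conv2d(len(keep_idx), conv2.out_channels,
                          conv2.kernel_size, conv2.stride, conv2.padding)
    new_conv2.weight.data = conv2.weight.data[:, keep_idx].clone()
    new_conv2.bias.data = conv2.bias.data.clone()
    return new_conv1, new_conv2


class TinyNet(nn.Module):
    def __init__(self, mid_channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(3, mid_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(mid_channels, 10, 3, padding=1)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        return self.conv2(x).mean(dim=(2, 3))  # global average pool -> logits


def finetune(model, steps=20):
    # Stand-in fine-tuning loop on random data; a real run would use the target dataset.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(8, 3, 32, 32)
        y = torch.randint(0, 10, (8,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()


model = TinyNet()
prune_ratio, num_iterations = 0.2, 3  # assumed values, not taken from the paper
for _ in range(num_iterations):
    n_out = model.conv1.out_channels
    n_keep = max(1, int(n_out * (1 - prune_ratio)))
    # Random selection of channels to keep -- no ranking criterion involved.
    keep_idx = torch.randperm(n_out)[:n_keep].sort().values
    model.conv1, model.conv2 = prune_conv_pair(model.conv1, model.conv2, keep_idx)
    finetune(model)  # recover accuracy through fine-tuning after each pruning step
print(model.conv1.out_channels, "channels remain in conv1")
```

The same loop structure carries over to the uniform and GFLOPs-aware strategies discussed in the abstract: instead of pruning a single layer, one would remove the same fraction of channels from every layer (uniform), or allocate the per-layer pruning fraction to meet a target GFLOPs budget.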