L0 Regularization based Fine-grained Neural Network Pruning Method
Qixin Xie, Chao Li, Boyu Diao, Zhulin An, Yongjun Xu
2019 11th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), June 2019. DOI: 10.1109/ECAI46879.2019.9041962
Deep neural networks have achieved remarkable results on many tasks. However, such successful but heavy models cannot be deployed directly on mobile devices, whose power and computing capacity are limited. An obvious solution to this problem is to compress neural networks by pruning useless weights. The key question is how to remove these redundancies while maintaining the network's performance. In this work, we propose a novel neural network pruning method: guiding the weights of a neural network to be sparse by introducing L0 regularization during the training stage, which effectively limits the damage that pruning does to performance and dramatically reduces the time overhead of the retraining stage. Experimental results with LeNet on MNIST and VGG-16 on CIFAR-10 demonstrate the effectiveness of this method compared to the classic method.
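The core idea, adding a sparsity-inducing L0 term to the training loss so that most weights can later be pruned with little accuracy loss and little retraining, is usually implemented with a differentiable surrogate, since the exact L0 norm is non-differentiable. Below is a minimal PyTorch sketch using the hard-concrete relaxation of Louizos et al. (2018) as one possible realization; the layer name `L0Linear`, the constants, and the penalty weight `lam` are illustrative assumptions, not details taken from this paper.

```python
# Sketch: L0-style sparsity via hard-concrete gates (Louizos et al., 2018).
# Hyperparameters and layer names are illustrative, not the paper's exact method.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class L0Linear(nn.Module):
    """Linear layer whose weights are multiplied by stochastic hard-concrete gates."""
    def __init__(self, in_features, out_features, gamma=-0.1, zeta=1.1, beta=2/3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One gate parameter per weight -> fine-grained (element-wise) sparsity.
        self.log_alpha = nn.Parameter(torch.zeros(out_features, in_features))
        self.gamma, self.zeta, self.beta = gamma, zeta, beta

    def _gates(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (gamma, zeta) and clip to [0, 1] so gates can be exactly zero.
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def forward(self, x):
        return F.linear(x, self.weight * self._gates(), self.bias)

    def l0_penalty(self):
        # Expected number of non-zero gates: a differentiable surrogate of the L0 norm.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()

# Training step: task loss plus a weighted L0 penalty drives weights toward sparsity,
# so pruning the zero-gated weights afterwards costs little accuracy.
model = nn.Sequential(L0Linear(784, 300), nn.ReLU(), L0Linear(300, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4  # illustrative penalty weight
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
penalty = sum(m.l0_penalty() for m in model if isinstance(m, L0Linear))
loss = F.cross_entropy(model(x), y) + lam * penalty
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the penalty is applied during training rather than after it, the surviving weights already compensate for the removed ones, which is what reduces the retraining overhead that post-hoc magnitude pruning normally requires.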