Efficient deep convolutional model compression with an active stepwise pruning approach
Sheng-sheng Wang, Chunshang Xing, Dong Liu
Int. J. Comput. Sci. Eng., published 2020-08-26. DOI: 10.1504/ijcse.2020.10031600
Deep models are structurally large and complex, which makes them hard to deploy on embedded hardware with limited memory and computing power. Although existing compression methods prune deep models effectively, they suffer from several issues: the fine-tuning phase requires multiple iterations, pruning granularity is difficult to control, and numerous hyperparameters must be set. In this paper, we propose an active stepwise pruning method based on a logarithmic function that requires setting only three hyperparameters and a few epochs. We also propose a recovery strategy that repairs incorrect pruning, thus preserving the model's prediction accuracy. Pruning and repairing alternate in a cyclic process as the weights in each layer are updated. Our method prunes the parameters of MobileNet, AlexNet, VGG-16 and ZFNet by factors of 5.6×, 11.7×, 16.6× and 15× respectively without any accuracy loss, outperforming existing methods.
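The abstract does not give the paper's exact logarithmic schedule or recovery criterion, but the overall loop it describes (stepwise magnitude pruning driven by a logarithmic sparsity schedule, alternated with a recovery step that revives incorrectly pruned weights) can be sketched as follows. All function names, the `log1p`-based schedule, and the gradient-magnitude recovery rule here are illustrative assumptions, not the authors' method:

```python
import numpy as np

def log_sparsity_schedule(epoch, total_epochs, final_sparsity):
    # Hypothetical logarithmic schedule: sparsity rises quickly in
    # early epochs, then levels off toward the final target.
    return final_sparsity * np.log1p(epoch) / np.log1p(total_epochs)

def prune_by_magnitude(weights, sparsity):
    # Zero out the smallest-magnitude weights; return the pruned
    # weights and the boolean mask of surviving connections.
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def recover(mask, grad, recover_frac=0.01):
    # Toy recovery step: revive the pruned weights with the largest
    # gradient magnitude, standing in for repair of incorrect pruning.
    pruned = ~mask
    n_recover = int(pruned.sum() * recover_frac)
    if n_recover == 0:
        return mask
    scores = np.abs(grad) * pruned
    idx = np.argsort(scores, axis=None)[-n_recover:]
    mask = mask.copy()
    mask.ravel()[idx] = True
    return mask

# Demo: one prune/recover cycle on random weights. In training, this
# cycle would alternate with weight updates on the surviving weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
s = log_sparsity_schedule(epoch=5, total_epochs=20, final_sparsity=0.9)
w_pruned, mask = prune_by_magnitude(w, s)
grad = rng.normal(size=w.shape)  # stand-in for real gradients
mask = recover(mask, grad)
```

Note the schedule needs only the final sparsity, the epoch budget, and the recovery fraction, loosely mirroring the abstract's claim of three hyperparameters and a few epochs.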