{"title":"基于连接灵敏度的过滤器剪枝","authors":"Yinong Xu, Yunsen Liao, Ying Zhao","doi":"10.1145/3415048.3416114","DOIUrl":null,"url":null,"abstract":"For the goal of reducing the remarkable redundancy in deep convolutional neural networks (CNNs), we propose an efficient framework to compress and accelerate CNN models. This work focus on pruning at filter level, mainly removing those less important filters. Firstly, we measure the importance of the filter by introducing a saliency criterion based on its corresponding connection sensitivity. In addition, we apply an algorithm, which transform a vanilla CNN module, to provide a quantitative ranking. Next, we prune the redundancy by discarding unimportant filters. Finally, we fine-tune the network to improve its accuracy. We verify the effectiveness of our method with VGGNet and ResNet on multiple datasets, such as CIFAR-10 and ImageNet ILSVRC-12. For instance, we achieve more than 50% FLOPs reduction on ResNet-56 with virtually the same accuracy as the reference network.","PeriodicalId":122511,"journal":{"name":"Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Filter Pruning Based on Connection Sensitivity\",\"authors\":\"Yinong Xu, Yunsen Liao, Ying Zhao\",\"doi\":\"10.1145/3415048.3416114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For the goal of reducing the remarkable redundancy in deep convolutional neural networks (CNNs), we propose an efficient framework to compress and accelerate CNN models. This work focus on pruning at filter level, mainly removing those less important filters. Firstly, we measure the importance of the filter by introducing a saliency criterion based on its corresponding connection sensitivity. In addition, we apply an algorithm, which transform a vanilla CNN module, to provide a quantitative ranking. Next, we prune the redundancy by discarding unimportant filters. Finally, we fine-tune the network to improve its accuracy. We verify the effectiveness of our method with VGGNet and ResNet on multiple datasets, such as CIFAR-10 and ImageNet ILSVRC-12. 
For instance, we achieve more than 50% FLOPs reduction on ResNet-56 with virtually the same accuracy as the reference network.\",\"PeriodicalId\":122511,\"journal\":{\"name\":\"Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3415048.3416114\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3415048.3416114","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
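The pipeline described in the abstract (score filters by a connection-sensitivity-style saliency, rank them, discard the lowest-ranked, then fine-tune) can be illustrated with a minimal sketch. This is not the authors' exact algorithm: it assumes PyTorch, uses a SNIP-style score |dL/dw * w| summed per output filter as a stand-in for the paper's saliency criterion, uses a toy CNN and random data in place of VGGNet/ResNet on CIFAR-10, and masks pruned filters to zero rather than structurally removing them; the pruning ratio is also an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A toy two-layer CNN standing in for VGGNet/ResNet.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)

model = TinyCNN()
criterion = nn.CrossEntropyLoss()

# Step 1: compute a connection-sensitivity-style saliency on a small
# calibration batch: one forward/backward pass populates the gradients.
x = torch.randn(8, 3, 32, 32)            # stand-in for CIFAR-10 images
y = torch.randint(0, 10, (8,))
criterion(model(x), y).backward()

def filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # conv.weight has shape (out_channels, in_channels, k, k);
    # sum |gradient * weight| over each output filter's parameters.
    return (conv.weight.grad * conv.weight).abs().sum(dim=(1, 2, 3))

# Steps 2-3: rank filters quantitatively, then discard the least important.
prune_ratio = 0.5                         # assumed pruning ratio
for conv in [model.conv1, model.conv2]:
    scores = filter_scores(conv)
    n_prune = int(prune_ratio * scores.numel())
    drop = torch.argsort(scores)[:n_prune]    # lowest-saliency filters
    with torch.no_grad():                     # mask instead of rebuilding layers
        conv.weight[drop] = 0.0
        if conv.bias is not None:
            conv.bias[drop] = 0.0

# Step 4: fine-tuning on the training set would follow here to recover accuracy.
```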