Coresets Application in Channel Pruning for Fast Neural Network Slimming

Wenfeng Yin, Gang Dong, Yaqian Zhao, Rengang Li
{"title":"核集在快速神经网络瘦身通道剪枝中的应用","authors":"Wenfeng Yin, Gang Dong, Yaqian Zhao, Rengang Li","doi":"10.1109/IJCNN52387.2021.9533343","DOIUrl":null,"url":null,"abstract":"Pruning reduces neural networks' parameters and accelerates inferences, enabling deep learning in resource-limited scenarios. Existing saliency-based pruning methods apply characteristics of feature maps or weights to judge the importance of neurons or structures, where weights' characteristics based methods are data-independent and robust for future input data. This paper proposes a coreset based pruning method for the data-independent structured compression, aiming to improve the construction efficiency of pruning. The first step of our method is to prune channels, according to the channel coreset merged from multi-rounds coresets constructions. Our method adjusts the importance function utilized in the random probability sampling during coresets construction procedures to achieve data-independent channel selections. The second step is recovering the precision of compressed networks through solving the compressed weights reconstruction by linear least squares. Our method is also generalized to implementations on multi-branch networks such as SqueezeNet and MobileNet-v2. In tests on classification networks like ResNet, it is observed that our method performs fast and achieves an accuracy decline as small as 0.99% when multiple layers are pruned without finetuning. As shown in evaluations on object detection networks, our method acquires the least decline in mAP indicator compared to comparison schemes, due to the advantage of data-independent channel selections of our method in preserving precision.","PeriodicalId":396583,"journal":{"name":"2021 International Joint Conference on Neural Networks (IJCNN)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Coresets Application in Channel Pruning for Fast Neural Network Slimming\",\"authors\":\"Wenfeng Yin, Gang Dong, Yaqian Zhao, Rengang Li\",\"doi\":\"10.1109/IJCNN52387.2021.9533343\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pruning reduces neural networks' parameters and accelerates inferences, enabling deep learning in resource-limited scenarios. Existing saliency-based pruning methods apply characteristics of feature maps or weights to judge the importance of neurons or structures, where weights' characteristics based methods are data-independent and robust for future input data. This paper proposes a coreset based pruning method for the data-independent structured compression, aiming to improve the construction efficiency of pruning. The first step of our method is to prune channels, according to the channel coreset merged from multi-rounds coresets constructions. Our method adjusts the importance function utilized in the random probability sampling during coresets construction procedures to achieve data-independent channel selections. The second step is recovering the precision of compressed networks through solving the compressed weights reconstruction by linear least squares. Our method is also generalized to implementations on multi-branch networks such as SqueezeNet and MobileNet-v2. In tests on classification networks like ResNet, it is observed that our method performs fast and achieves an accuracy decline as small as 0.99% when multiple layers are pruned without finetuning. 
As shown in evaluations on object detection networks, our method acquires the least decline in mAP indicator compared to comparison schemes, due to the advantage of data-independent channel selections of our method in preserving precision.\",\"PeriodicalId\":396583,\"journal\":{\"name\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"146 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN52387.2021.9533343\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN52387.2021.9533343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Pruning reduces a neural network's parameters and accelerates inference, enabling deep learning in resource-limited scenarios. Existing saliency-based pruning methods use characteristics of feature maps or of weights to judge the importance of neurons or structures; the weight-based methods are data-independent and robust to future input data. This paper proposes a coreset-based pruning method for data-independent structured compression, aiming to improve the construction efficiency of pruning. The first step of our method prunes channels according to a channel coreset merged from multiple rounds of coreset construction. During construction, our method adjusts the importance function used in the random probability sampling to achieve data-independent channel selection. The second step recovers the precision of the compressed network by solving the compressed-weight reconstruction with linear least squares. Our method also generalizes to multi-branch networks such as SqueezeNet and MobileNet-v2. In tests on classification networks such as ResNet, our method runs fast and incurs an accuracy decline as small as 0.99% when multiple layers are pruned without fine-tuning. In evaluations on object detection networks, our method shows the smallest decline in mAP among the compared schemes, owing to the precision-preserving advantage of its data-independent channel selection.
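To make the first step concrete, below is a minimal sketch of data-independent channel selection by importance sampling merged over several rounds. The abstract does not specify the importance function, only that it is adjusted to depend on weights alone; here the squared L2 norm of each input channel's kernel slice is assumed as a stand-in, and the function name `channel_coreset` and its parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def channel_coreset(weight, m, rounds=5, seed=0):
    """Select input channels by importance sampling, merging several rounds.

    weight: conv kernel of shape (out_channels, in_channels, k, k)
    m:      number of channels to draw in each sampling round
    """
    rng = np.random.default_rng(seed)
    in_c = weight.shape[1]
    # Assumed data-independent importance: squared L2 norm of each input
    # channel's kernel slice, computed from the weights alone (no input data).
    importance = np.square(weight).sum(axis=(0, 2, 3))
    prob = importance / importance.sum()
    kept = set()
    # Merge the channel coresets drawn in several independent sampling rounds,
    # mirroring the multi-round construction described in the abstract.
    for _ in range(rounds):
        idx = rng.choice(in_c, size=min(m, in_c), replace=False, p=prob)
        kept.update(int(i) for i in idx)
    return np.array(sorted(kept))

# Example: keep a coreset of channels from a 64x32x3x3 kernel.
w = np.random.randn(64, 32, 3, 3)
keep = channel_coreset(w, m=8, rounds=4)
```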
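The second step, least-squares weight reconstruction, can likewise be sketched. The abstract gives no formulation, so this toy version treats the layer as linear and solves for new weights over the kept channels that best reproduce the uncompressed layer's outputs; the name `reconstruct_weights` and the use of sample inputs `X` are assumptions for illustration only.

```python
import numpy as np

def reconstruct_weights(X, W, keep):
    """Least-squares reconstruction of a pruned layer's weights.

    X:    (n_samples, in_features) inputs to the layer
    W:    (in_features, out_features) original weights
    keep: indices of the retained input channels
    Solves min_W' || X @ W - X[:, keep] @ W' ||_F by linear least squares.
    """
    target = X @ W  # outputs of the uncompressed layer
    W_new, *_ = np.linalg.lstsq(X[:, keep], target, rcond=None)
    return W_new
```

Reconstructing the surviving weights this way is what lets multiple layers be pruned without fine-tuning while keeping the accuracy drop small, as reported in the abstract.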