PPT-KP: Pruning Point Training-based Kernel Pruning for Deep Convolutional Neural Networks

Kwanghyun Koo, Hyun Kim
{"title":"基于剪枝点训练的深度卷积神经网络核剪枝","authors":"Kwanghyun Koo, Hyun Kim","doi":"10.1109/AICAS57966.2023.10168622","DOIUrl":null,"url":null,"abstract":"Pruning, which is a representative method for compressing huge convolutional neural network (CNN) models, has been mainly studied in two directions: weight pruning and filter pruning, with both approaches having clear limitations caused by their intrinsic characteristics. To solve this problem, research on kernel pruning, which has the advantages of both methods, has recently advanced. In this study, pruning point training-based kernel pruning (PPT-KP) is proposed to address the problems of existing kernel pruning methods. With PPT-KP, the L1 norm of the kernel converges to zero through an adaptive regularizer that applies L1 regularization of different intensities depending on the size of the L1 norm of the kernel to secure network sparsity and obtain multiple margin spaces for pruning. Thus, outstanding kernel pruning is possible because several pruning points can be created. PPT-KP outperformed several existing filter pruning and kernel pruning methods on various networks and datasets in terms of the trade-off between FLOPs reduction and accuracy drops. In particular, PPT-KP reduced parameters and FLOPs by 77.2% and 68.9%, respectively, in ResNet-56 on the CIFAR-10 dataset with only a 0.05% accuracy degradation.","PeriodicalId":296649,"journal":{"name":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PPT-KP: Pruning Point Training-based Kernel Pruning for Deep Convolutional Neural Networks\",\"authors\":\"Kwanghyun Koo, Hyun Kim\",\"doi\":\"10.1109/AICAS57966.2023.10168622\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pruning, which is a representative method for compressing huge convolutional neural network (CNN) models, has been mainly studied in two directions: weight pruning and filter pruning, with both approaches having clear limitations caused by their intrinsic characteristics. To solve this problem, research on kernel pruning, which has the advantages of both methods, has recently advanced. In this study, pruning point training-based kernel pruning (PPT-KP) is proposed to address the problems of existing kernel pruning methods. With PPT-KP, the L1 norm of the kernel converges to zero through an adaptive regularizer that applies L1 regularization of different intensities depending on the size of the L1 norm of the kernel to secure network sparsity and obtain multiple margin spaces for pruning. Thus, outstanding kernel pruning is possible because several pruning points can be created. PPT-KP outperformed several existing filter pruning and kernel pruning methods on various networks and datasets in terms of the trade-off between FLOPs reduction and accuracy drops. 
In particular, PPT-KP reduced parameters and FLOPs by 77.2% and 68.9%, respectively, in ResNet-56 on the CIFAR-10 dataset with only a 0.05% accuracy degradation.\",\"PeriodicalId\":296649,\"journal\":{\"name\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICAS57966.2023.10168622\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS57966.2023.10168622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Pruning, a representative method for compressing large convolutional neural network (CNN) models, has mainly been studied in two directions, weight pruning and filter pruning, and both approaches have clear limitations arising from their intrinsic characteristics. To address this problem, research on kernel pruning, which combines the advantages of both methods, has recently advanced. In this study, pruning point training-based kernel pruning (PPT-KP) is proposed to address the shortcomings of existing kernel pruning methods. With PPT-KP, the L1 norm of a kernel converges to zero through an adaptive regularizer that applies L1 regularization of different intensities depending on the magnitude of the kernel's L1 norm, securing network sparsity and creating multiple margin spaces for pruning. Effective kernel pruning is thus possible because several pruning points can be created. PPT-KP outperformed several existing filter pruning and kernel pruning methods on various networks and datasets in terms of the trade-off between FLOPs reduction and accuracy drop. In particular, PPT-KP reduced parameters and FLOPs by 77.2% and 68.9%, respectively, on ResNet-56 with the CIFAR-10 dataset, with only a 0.05% accuracy degradation.
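The mechanism the abstract describes, stronger L1 pressure on kernels whose L1 norm is already small so that they converge to zero and form candidate pruning points, can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch only: the two-level intensity schedule below (coefficients `weak`/`strong` and threshold `tau`) is an assumed stand-in, not the paper's exact regularizer.

```python
import torch
import torch.nn as nn

def adaptive_kernel_l1(model: nn.Module,
                       weak: float = 1e-5,
                       strong: float = 1e-3,
                       tau: float = 0.5) -> torch.Tensor:
    """Kernel-wise L1 penalty whose intensity depends on each kernel's
    current L1 norm: kernels below the (hypothetical) threshold `tau`
    receive the `strong` coefficient and are driven toward zero, while
    larger kernels receive only the `weak` coefficient."""
    penalty = torch.tensor(0.0)
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight  # shape: (out_ch, in_ch, kH, kW)
            # Each (kH, kW) slice is one kernel; sum |w| over it.
            norms = w.abs().sum(dim=(2, 3))
            # Pick the per-kernel intensity from the detached norms so the
            # schedule itself does not receive gradients.
            coeff = torch.where(norms.detach() < tau,
                                torch.full_like(norms, strong),
                                torch.full_like(norms, weak))
            penalty = penalty.to(norms.device) + (coeff * norms).sum()
    return penalty

# Typical use inside a training step (cross-entropy criterion assumed):
#   loss = criterion(model(x), y) + adaptive_kernel_l1(model)
#   loss.backward()
# After training, kernels whose L1 norm is (near) zero can be pruned.
```

Under this kind of schedule, kernels that drift below the threshold experience progressively dominant regularization until their norm collapses to zero, which is one plausible reading of how multiple pruning points and margin spaces arise during training.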