{"title":"PPT-KP: Pruning Point Training-based Kernel Pruning for Deep Convolutional Neural Networks","authors":"Kwanghyun Koo, Hyun Kim","doi":"10.1109/AICAS57966.2023.10168622","DOIUrl":null,"url":null,"abstract":"Pruning, which is a representative method for compressing huge convolutional neural network (CNN) models, has been mainly studied in two directions: weight pruning and filter pruning, with both approaches having clear limitations caused by their intrinsic characteristics. To solve this problem, research on kernel pruning, which has the advantages of both methods, has recently advanced. In this study, pruning point training-based kernel pruning (PPT-KP) is proposed to address the problems of existing kernel pruning methods. With PPT-KP, the L1 norm of the kernel converges to zero through an adaptive regularizer that applies L1 regularization of different intensities depending on the size of the L1 norm of the kernel to secure network sparsity and obtain multiple margin spaces for pruning. Thus, outstanding kernel pruning is possible because several pruning points can be created. PPT-KP outperformed several existing filter pruning and kernel pruning methods on various networks and datasets in terms of the trade-off between FLOPs reduction and accuracy drops. In particular, PPT-KP reduced parameters and FLOPs by 77.2% and 68.9%, respectively, in ResNet-56 on the CIFAR-10 dataset with only a 0.05% accuracy degradation.","PeriodicalId":296649,"journal":{"name":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS57966.2023.10168622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Pruning, a representative method for compressing large convolutional neural network (CNN) models, has mainly been studied in two directions, weight pruning and filter pruning, and both approaches have clear limitations stemming from their intrinsic characteristics. To overcome these limitations, research on kernel pruning, which combines the advantages of both methods, has recently advanced. In this study, pruning point training-based kernel pruning (PPT-KP) is proposed to address the shortcomings of existing kernel pruning methods. In PPT-KP, an adaptive regularizer applies L1 regularization of varying intensity according to the magnitude of each kernel's L1 norm, driving the L1 norms of prunable kernels toward zero; this secures network sparsity and provides multiple margin spaces for pruning. Because several pruning points can be created in this way, highly effective kernel pruning becomes possible. PPT-KP outperformed several existing filter pruning and kernel pruning methods on various networks and datasets in terms of the trade-off between FLOPs reduction and accuracy drop. In particular, on ResNet-56 with the CIFAR-10 dataset, PPT-KP reduced parameters and FLOPs by 77.2% and 68.9%, respectively, with only a 0.05% accuracy degradation.
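To make the adaptive-regularization idea concrete, below is a minimal PyTorch sketch of a kernel-wise L1 penalty whose intensity depends on each kernel's own L1 norm, as the abstract describes. The abstract does not specify PPT-KP's actual intensity schedule, so the two-level rule here (a stronger penalty for kernels whose norm is already small), along with the names adaptive_l1_penalty, tau, lambda_weak, and lambda_strong, are illustrative assumptions rather than the authors' method.

import torch
import torch.nn as nn

def adaptive_l1_penalty(model: nn.Module,
                        tau: float = 0.5,
                        lambda_weak: float = 1e-5,
                        lambda_strong: float = 1e-4) -> torch.Tensor:
    """Sum an L1 penalty over all 2D conv kernels, choosing the intensity
    per kernel from the magnitude of that kernel's L1 norm.

    NOTE: the thresholded two-level rule below is a hypothetical stand-in
    for PPT-KP's adaptive regularizer, which the abstract does not detail.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # weight shape: [out_channels, in_channels, kH, kW];
            # one L1 norm per kernel, i.e., per (out, in) channel pair.
            norms = module.weight.abs().sum(dim=(2, 3))
            # Kernels already near zero get the stronger coefficient,
            # pushing their L1 norms further toward zero; the rest get
            # the weaker one. The coefficients carry no gradient.
            coeff = torch.where(norms < tau,
                                torch.full_like(norms, lambda_strong),
                                torch.full_like(norms, lambda_weak))
            penalty = penalty + (coeff * norms).sum()
    return penalty

# Usage inside a training step:
#   loss = criterion(model(x), y) + adaptive_l1_penalty(model)
#   loss.backward()

Kernels whose L1 norms are driven to (near) zero by such a penalty can then be removed with little effect on the loss, which is what creates the "margin spaces" and pruning points the abstract refers to.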