{"title":"网络剪枝中动态标签的研究","authors":"Lijun Zhang, Yaomin Luo, Shiqi Xie, Xiucheng Wu","doi":"10.1109/ICPS58381.2023.10128021","DOIUrl":null,"url":null,"abstract":"Convolutional neural network compression technology plays an extremely important role in model transplantation and deployment, especially in mobile and embedded hardware platforms with small memory and low computing power, compression technology is even more critical. Convolutional neural network channel pruning technology has developed rapidly in recent years, and a number of excellent pruning algorithms have emerged. The channel pruning technology has gradually developed from the earliest static pruning to dynamic pruning, which adopts different pruning schemes for different inputs. However, the current dynamic pruning scheme needs to introduce multiple modules to predict the mask to prune the feature maps, and some schemes also introduce multiple hyperparameters in the loss function to balance the model accuracy and pruning rate, which leads to The model has difficulty converging during training. We propose a dynamic pruning method, each convolution structure configures a simple prediction module, and generating dynamic labels through the input's norm and similarity to guide the prediction module training, which will not bring new parameters to the loss function. We conducted related experiments on multiple models on the Cifar10 datasets. The experiments on ResNet56 show that our scheme is 1.3% higher than the most advanced scheme in terms of compression rate under the premise of the same accuracy.","PeriodicalId":426122,"journal":{"name":"2023 IEEE 6th International Conference on Industrial Cyber-Physical Systems (ICPS)","volume":"401 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on Dynamic Labels in Network Pruning\",\"authors\":\"Lijun Zhang, Yaomin Luo, Shiqi Xie, Xiucheng Wu\",\"doi\":\"10.1109/ICPS58381.2023.10128021\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional neural network compression technology plays an extremely important role in model transplantation and deployment, especially in mobile and embedded hardware platforms with small memory and low computing power, compression technology is even more critical. Convolutional neural network channel pruning technology has developed rapidly in recent years, and a number of excellent pruning algorithms have emerged. The channel pruning technology has gradually developed from the earliest static pruning to dynamic pruning, which adopts different pruning schemes for different inputs. However, the current dynamic pruning scheme needs to introduce multiple modules to predict the mask to prune the feature maps, and some schemes also introduce multiple hyperparameters in the loss function to balance the model accuracy and pruning rate, which leads to The model has difficulty converging during training. We propose a dynamic pruning method, each convolution structure configures a simple prediction module, and generating dynamic labels through the input's norm and similarity to guide the prediction module training, which will not bring new parameters to the loss function. We conducted related experiments on multiple models on the Cifar10 datasets. 
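The abstract does not give the exact formulation, but the idea it describes, a tiny per-convolution predictor trained against labels built from channel norms and channel similarity, can be sketched in PyTorch as follows. This is an illustrative sketch only, not the authors' implementation; MaskPredictor, make_dynamic_labels, keep_ratio, and sim_thresh are assumed names and criteria.

# Minimal sketch (assumptions throughout, not the paper's code) of dynamic
# labels from channel norms and pairwise channel similarity, used to train a
# lightweight per-convolution mask predictor with a plain BCE loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskPredictor(nn.Module):
    """Tiny per-layer module: global-average-pool the layer input, then one
    FC layer predicts a soft keep/prune score for each output channel."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.fc = nn.Linear(in_channels, out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C_in, H, W) -> pooled (N, C_in) -> scores (N, C_out) in (0, 1)
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return torch.sigmoid(self.fc(pooled))

def make_dynamic_labels(feat: torch.Tensor, keep_ratio: float = 0.5,
                        sim_thresh: float = 0.95) -> torch.Tensor:
    """Build per-sample binary labels from a layer's output feature maps.

    Channels with large L2 norm are marked 'keep'; a channel that is nearly
    identical (high cosine similarity) to an already-kept channel is skipped
    as redundant. The exact criteria here are assumptions.
    """
    n, c = feat.shape[:2]
    flat = feat.flatten(2)                       # (N, C, H*W)
    norms = flat.norm(dim=2)                     # (N, C) per-channel norms
    unit = F.normalize(flat, dim=2)              # unit vectors per channel
    sim = torch.bmm(unit, unit.transpose(1, 2))  # (N, C, C) cosine similarity

    labels = torch.zeros(n, c, device=feat.device)
    k = max(1, int(keep_ratio * c))
    order = norms.argsort(dim=1, descending=True)
    for i in range(n):
        kept = []
        for ch in order[i].tolist():
            # Skip channels too similar to one we already kept (redundant).
            if any(sim[i, ch, j] > sim_thresh for j in kept):
                continue
            kept.append(ch)
            if len(kept) == k:
                break
        labels[i, kept] = 1.0
    return labels

# Usage sketch inside a training step for one convolution layer:
# feat = conv(x)                               # (N, C_out, H, W)
# scores = predictor(x)                        # (N, C_out)
# labels = make_dynamic_labels(feat.detach())  # no gradient through labels
# mask_loss = F.binary_cross_entropy(scores, labels)
# pruned = feat * scores.unsqueeze(-1).unsqueeze(-1)  # soft channel masking

Because the predictor is supervised directly by these labels through a plain BCE term, the mask loss can simply be added to the task loss without a tuned balancing coefficient, which is one reading of the abstract's claim that the method introduces no new parameters into the loss function.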