{"title":"基于结构搜索和中间表示的网络剪枝","authors":"Dai Xuanhui, Chen Juan, Wen Quan","doi":"10.1109/ICCWAMTIP53232.2021.9674132","DOIUrl":null,"url":null,"abstract":"Network pruning is widely used for compressing large neural networks to save computational resources. In traditional pruning methods, predefined hyperparameters are often required to determine the network structure of the target small network. However, too many hyperparameters are often undesirable. Therefore, we use the transformable architecture search (TAS) method to dynamically search the network structure of each layer when pruning the network width. In the TAS method, the channels number of the pruned network in each layer is represented by a learnable probability distribution. By minimizing computation cost, the probability distribution can be calculated and used to get the width configuration of the target pruned network. Then, the depth of the network was compressed based on the model get in the previous step. The method for compressing depth is block-wise intermediate representation training. This method is based on the hint training, where the network depth is compressed by comparing the intermediate representation of each layer of two equally wide teacher and student models. In the experiments, about 0.4% improvement over the existing method was viewed for the ResNet network on both CIFAR10 and CIFAR100 datasets.","PeriodicalId":358772,"journal":{"name":"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Network Pruning Based On Architecture Search and Intermediate Representation\",\"authors\":\"Dai Xuanhui, Chen Juan, Wen Quan\",\"doi\":\"10.1109/ICCWAMTIP53232.2021.9674132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network pruning is widely used for compressing large neural networks to save computational resources. In traditional pruning methods, predefined hyperparameters are often required to determine the network structure of the target small network. However, too many hyperparameters are often undesirable. Therefore, we use the transformable architecture search (TAS) method to dynamically search the network structure of each layer when pruning the network width. In the TAS method, the channels number of the pruned network in each layer is represented by a learnable probability distribution. By minimizing computation cost, the probability distribution can be calculated and used to get the width configuration of the target pruned network. Then, the depth of the network was compressed based on the model get in the previous step. The method for compressing depth is block-wise intermediate representation training. This method is based on the hint training, where the network depth is compressed by comparing the intermediate representation of each layer of two equally wide teacher and student models. 
In the experiments, about 0.4% improvement over the existing method was viewed for the ResNet network on both CIFAR10 and CIFAR100 datasets.\",\"PeriodicalId\":358772,\"journal\":{\"name\":\"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCWAMTIP53232.2021.9674132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCWAMTIP53232.2021.9674132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Network Pruning Based On Architecture Search and Intermediate Representation
Network pruning is widely used to compress large neural networks and save computational resources. Traditional pruning methods often require predefined hyperparameters to fix the structure of the target small network, but relying on many hyperparameters is usually undesirable. We therefore use the transformable architecture search (TAS) method to dynamically search the structure of each layer when pruning the network width. In TAS, the number of channels in each layer of the pruned network is represented by a learnable probability distribution. By minimizing the computation cost, this distribution is learned and then used to derive the width configuration of the target pruned network. The network depth is then compressed based on the model obtained in the previous step, using block-wise intermediate representation training. This method builds on hint training: the depth is reduced by comparing the intermediate representations of corresponding blocks in an equally wide teacher and student model. In the experiments, the method improves accuracy by about 0.4% over the existing method for ResNet on both the CIFAR10 and CIFAR100 datasets.
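As a rough illustration of the width-search idea, the PyTorch-style sketch below (not the authors' code; names such as CANDIDATE_WIDTHS, SearchableLayer, and expected_flops are assumptions made here) keeps learnable logits over candidate channel counts for one layer, relaxes them with Gumbel-softmax, and exposes an expected-computation-cost term that can be added to the training loss so that minimizing it pushes the distribution toward narrower configurations.

# Minimal sketch of TAS-style width search for a single layer.
# All identifiers are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_WIDTHS = [16, 32, 48, 64]   # possible channel counts for this layer

class SearchableLayer(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        # Build the convolution at the largest candidate width; narrower
        # choices are emulated by masking the extra output channels.
        self.conv = nn.Conv2d(in_channels, max(CANDIDATE_WIDTHS), 3, padding=1)
        # Learnable logits over candidate widths (the "learnable probability
        # distribution" from the abstract, before normalization).
        self.width_logits = nn.Parameter(torch.zeros(len(CANDIDATE_WIDTHS)))

    def forward(self, x, tau=1.0):
        out = F.relu(self.conv(x))
        # Gumbel-softmax gives a differentiable, approximately one-hot sample
        # over the candidate widths.
        probs = F.gumbel_softmax(self.width_logits, tau=tau, hard=False)
        # Soft channel mask: channel c is kept with the total probability of
        # all candidate widths that are larger than c.
        channel_idx = torch.arange(out.size(1), device=out.device)
        keep = torch.stack([(channel_idx < w).float() for w in CANDIDATE_WIDTHS])
        mask = (probs.unsqueeze(1) * keep).sum(dim=0)       # shape: (max_width,)
        return out * mask.view(1, -1, 1, 1)

    def expected_flops(self, in_channels, spatial_size):
        # Expected cost of a 3x3 convolution under the current width
        # distribution; minimizing this term favors smaller channel counts.
        probs = F.softmax(self.width_logits, dim=0)
        widths = torch.tensor(CANDIDATE_WIDTHS, dtype=probs.dtype,
                              device=probs.device)
        expected_width = (probs * widths).sum()
        return expected_width * in_channels * 9 * spatial_size * spatial_size

# Hypothetical usage: total_loss = task_loss + lambda_cost * layer.expected_flops(3, 32)

Instantiating the convolution at the widest candidate and applying soft channel masks is one common way to make the width choice differentiable; after training, the width with the highest probability would be kept and the layer rebuilt at that size.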
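The depth-compression step relies on comparing the intermediate representations of an equally wide teacher and student. The following hint-style loss is a minimal sketch under the simplifying assumption that each student block is paired with one teacher block; block definitions and pairing are illustrative, not the paper's implementation.

# Minimal sketch of hint-style training on intermediate representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

def hint_loss(student_blocks, teacher_blocks, x):
    # Accumulate the MSE between the representation produced by each student
    # block and the corresponding (frozen) teacher block.
    loss = 0.0
    s, t = x, x
    for s_block, t_block in zip(student_blocks, teacher_blocks):
        s = s_block(s)
        with torch.no_grad():          # teacher provides fixed targets
            t = t_block(t)
        loss = loss + F.mse_loss(s, t)
    return loss

def make_block(channels):
    # Toy block; teacher and student share the same width, as the method assumes.
    return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

teacher = nn.ModuleList([make_block(16) for _ in range(4)])   # deeper teacher
student = nn.ModuleList([make_block(16) for _ in range(2)])   # shallower student

# For illustration, pair the student blocks with the first len(student)
# teacher blocks; the actual block correspondence is a design choice.
x = torch.randn(2, 16, 32, 32)
loss = hint_loss(student, teacher[:len(student)], x)
loss.backward()

Because the two models have the same width, the feature maps can be compared directly with an MSE loss and no extra adaptation layer is needed, which is what allows the depth to be reduced while the student mimics the teacher block by block.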