Sensitiveness Based Strategy for Network Compression
Yihe Lu, Kaiyuan Feng, Hao Li, Yue Wu, Maoguo Gong, Yaoting Xu
DOI: 10.1109/IAI50351.2020.9262195
2020 2nd International Conference on Industrial Artificial Intelligence (IAI), published 2020-10-23
The success of convolutional neural networks has brought great improvements to various tasks. Typically, larger and deeper networks such as AlexNet, VGGNets, GoogLeNet, and ResNets, which comprise enormous numbers of convolutional filters, are designed to increase performance on classification tasks. However, these complicated structures prevent models from being deployed in real applications, where computational resources are limited. In this paper, we propose a simple method to rank the importance of each convolutional layer and to compress the network by removing redundant filters. Our work has two implications: 1) reducing the width of unimportant layers leads to better performance on classification tasks; 2) random pruning within each convolutional layer yields results similar to those of weight-searching algorithms, which suggests that the structure of the network dominates its representational ability. We reveal these properties of different convolutional layers for VGG-16, a customized VGG-4, and ResNet-18 on different datasets. Consequently, the importance of each convolutional layer is characterized as its sensitiveness, which is used to compress these networks greatly.
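The core idea of the abstract can be sketched as a simple ranking loop: randomly prune a fixed fraction of each layer's filters in turn, measure the resulting accuracy drop, and treat that drop as the layer's sensitiveness. The sketch below is illustrative and not the authors' exact procedure; `layer_sensitivity` and `toy_accuracy` are hypothetical names, and in a real experiment `evaluate` would be validation accuracy of the (possibly fine-tuned) pruned network rather than an analytic stand-in.

```python
import random

def layer_sensitivity(evaluate, layers, prune_ratio=0.3, seed=0):
    """Rank layers by the accuracy drop caused by randomly pruning
    a fraction of their filters (illustrative sketch, not the paper's code).

    evaluate: callable mapping {layer_name: filter_list} -> accuracy
    layers:   dict of layer name -> list of filters
    Returns layers sorted from least to most sensitive.
    """
    rng = random.Random(seed)
    baseline = evaluate(layers)
    scores = {}
    for name, filters in layers.items():
        keep = max(1, int(len(filters) * (1 - prune_ratio)))
        pruned = dict(layers)                      # shallow copy of the config
        pruned[name] = rng.sample(filters, keep)   # random pruning, per finding (2)
        scores[name] = baseline - evaluate(pruned)  # sensitiveness = accuracy drop
    return sorted(scores.items(), key=lambda kv: kv[1])  # least sensitive first

# Toy stand-in for validation accuracy as a function of layer widths:
# conv2 contributes more per filter, so it should rank as more sensitive.
def toy_accuracy(layers):
    return 0.5 + 0.01 * len(layers["conv1"]) + 0.04 * len(layers["conv2"])

layers = {"conv1": list(range(8)), "conv2": list(range(8))}
ranking = layer_sensitivity(toy_accuracy, layers)
print(ranking[0][0])  # least sensitive layer, the best candidate for aggressive compression
```

Under this sketch, layers at the head of the ranking tolerate heavy width reduction with little accuracy loss, matching the paper's observation that downsizing unimportant layers can even improve classification performance.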