Sensitiveness Based Strategy for Network Compression

Yihe Lu, Kaiyuan Feng, Hao Li, Yue Wu, Maoguo Gong, Yaoting Xu
{"title":"基于灵敏度的网络压缩策略","authors":"Yihe Lu, Kaiyuan Feng, Hao Li, Yue Wu, Maoguo Gong, Yaoting Xu","doi":"10.1109/IAI50351.2020.9262195","DOIUrl":null,"url":null,"abstract":"The success of convolutional neural networks has contributed great improvement on various tasks. Typically, larger and deeper networks are designed to increase performance on classification works, such as AlexNet, VGGNets, GoogleNet and ResNets, which are composed by enormous convolutional filters. However, these complicated structures constrain models to be deployed into real application due to limited computational resources. In this paper, we propose a simple method to sort the importance of each convolutional layer, as well as compressing network by removing redundant filters. There are two implications in our work: 1) Downsizing the width of unimportant layers will lead to better performance on classification tasks. 2) The random-pruning in each convolutional layer can present similar results as weight-searching algorithm, which means that structure of the network dominates its ability of representation. As a result, we reveal the property of different convolutional layers for VGG-16, customized VGG-4 and ResNet-18 on different datasets. Consequently, the importance of different convolutional layers is described as sensitiveness to compress these networks greatly.","PeriodicalId":137183,"journal":{"name":"2020 2nd International Conference on Industrial Artificial Intelligence (IAI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sensitiveness Based Strategy for Network Compression\",\"authors\":\"Yihe Lu, Kaiyuan Feng, Hao Li, Yue Wu, Maoguo Gong, Yaoting Xu\",\"doi\":\"10.1109/IAI50351.2020.9262195\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The success of convolutional neural networks has contributed great improvement on various tasks. Typically, larger and deeper networks are designed to increase performance on classification works, such as AlexNet, VGGNets, GoogleNet and ResNets, which are composed by enormous convolutional filters. However, these complicated structures constrain models to be deployed into real application due to limited computational resources. In this paper, we propose a simple method to sort the importance of each convolutional layer, as well as compressing network by removing redundant filters. There are two implications in our work: 1) Downsizing the width of unimportant layers will lead to better performance on classification tasks. 2) The random-pruning in each convolutional layer can present similar results as weight-searching algorithm, which means that structure of the network dominates its ability of representation. As a result, we reveal the property of different convolutional layers for VGG-16, customized VGG-4 and ResNet-18 on different datasets. 
Consequently, the importance of different convolutional layers is described as sensitiveness to compress these networks greatly.\",\"PeriodicalId\":137183,\"journal\":{\"name\":\"2020 2nd International Conference on Industrial Artificial Intelligence (IAI)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 2nd International Conference on Industrial Artificial Intelligence (IAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IAI50351.2020.9262195\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 2nd International Conference on Industrial Artificial Intelligence (IAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAI50351.2020.9262195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The success of convolutional neural networks has brought great improvements to various tasks. Typically, larger and deeper networks such as AlexNet, VGGNets, GoogLeNet, and ResNets, built from enormous numbers of convolutional filters, are designed to increase classification performance. However, these complicated structures keep models from being deployed in real applications, where computational resources are limited. In this paper, we propose a simple method to rank the importance of each convolutional layer and to compress the network by removing redundant filters. Our work has two implications: 1) reducing the width of unimportant layers leads to better performance on classification tasks; 2) random pruning within each convolutional layer yields results similar to weight-searching algorithms, which means that the structure of the network dominates its representational ability. We reveal the properties of the different convolutional layers of VGG-16, a customized VGG-4, and ResNet-18 on different datasets. Consequently, the importance of each convolutional layer is characterized as its sensitiveness, which is used to compress these networks greatly.
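
To make the pruning idea concrete, the following is a minimal PyTorch sketch of one way to score convolutional layers by random-pruning sensitiveness. It illustrates the idea described in the abstract rather than the authors' exact procedure: the `evaluate` callback, the pruning ratio, and masking-based pruning (zeroing whole filters instead of physically removing them) are all assumptions made for this example.

```python
# Hypothetical sketch of sensitiveness-style layer ranking (not the authors'
# exact procedure): randomly zero a fraction of the filters in one
# convolutional layer at a time and measure the resulting accuracy drop.
import copy
import torch
import torch.nn as nn

def prune_layer_randomly(model, layer_name, ratio=0.5, seed=0):
    """Return a copy of `model` with `ratio` of the filters in the named
    Conv2d layer zeroed out (structured pruning by masking)."""
    pruned = copy.deepcopy(model)
    conv = dict(pruned.named_modules())[layer_name]
    assert isinstance(conv, nn.Conv2d)
    g = torch.Generator().manual_seed(seed)
    n_prune = int(conv.out_channels * ratio)
    idx = torch.randperm(conv.out_channels, generator=g)[:n_prune]
    with torch.no_grad():
        conv.weight[idx] = 0.0              # zero whole output filters
        if conv.bias is not None:
            conv.bias[idx] = 0.0
    return pruned

def rank_layers_by_sensitiveness(model, evaluate, ratio=0.5):
    """Rank Conv2d layers by the accuracy drop caused by randomly pruning
    them. `evaluate(model) -> accuracy` is assumed to be supplied by the
    caller (e.g. a pass over a held-out validation set)."""
    base_acc = evaluate(model)
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            acc = evaluate(prune_layer_randomly(model, name, ratio))
            scores[name] = base_acc - acc   # larger drop = more sensitive
    # Least sensitive layers first: these are the candidates for downsizing.
    return sorted(scores.items(), key=lambda kv: kv[1])
```

Layers near the top of the returned ranking (smallest accuracy drop) would be the first candidates for width reduction, matching the abstract's observation that downsizing unimportant layers can even improve classification performance.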