FOCM: Faster Octave Convolution Using Mix-scaling

Kuan-Hsian Hsieh, Erh-Chung Chen, Che-Rung Lee
{"title":"FOCM: Faster Octave Convolution Using Mix-scaling","authors":"Kuan-Hsian Hsieh, Erh-Chung Chen, Che-Rung Lee","doi":"10.1109/taai54685.2021.00015","DOIUrl":null,"url":null,"abstract":"Octave convolution that separates the feature maps for different resolutions is an effective method to reduce the spatial redundancy in Convolution Neural Networks (CNN). In this paper, we propose a faster version of octave convolution, FOCM, which can further reduce the computation cost of CNNs. Similar to the octave convolution, FOCM divides the input and output feature maps into the domains of different resolutions, but without explicit information exchange among them. In addition, FOCM utilizes the mix-scaled convolution kernels to learn different sized spatial features. Experiments on various depth ResNet with ImageNet data-set have shown that FOCM can reduce 33.9% to 46.4% operations of the original models, and save 11.1% to 21.7% FLOPS of the models using octave convolutions, with similar top-1 and top-5 accuracy.","PeriodicalId":343821,"journal":{"name":"2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/taai54685.2021.00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Octave convolution, which separates feature maps by resolution, is an effective method for reducing spatial redundancy in Convolutional Neural Networks (CNNs). In this paper, we propose a faster version of octave convolution, FOCM, which can further reduce the computational cost of CNNs. Like octave convolution, FOCM divides the input and output feature maps into domains of different resolutions, but without explicit information exchange among them. In addition, FOCM utilizes mix-scaled convolution kernels to learn spatial features of different sizes. Experiments on ResNets of various depths with the ImageNet dataset show that FOCM can reduce the operations of the original models by 33.9% to 46.4%, and save 11.1% to 21.7% of the FLOPs of models using octave convolutions, with similar top-1 and top-5 accuracy.
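The per-layer cost arithmetic behind the abstract's claim can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes a hypothetical split ratio `alpha` of channels placed at half resolution, a stride-1 "same" convolution, and no cross-resolution paths (the simplification the abstract describes). The resulting saving is for a single layer only; the paper's 33.9%–46.4% figures are for whole ResNets, which also contain unsplit layers.

```python
def conv_flops(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a k x k, stride-1, 'same' convolution."""
    return h * w * c_in * c_out * k * k

def focm_like_flops(h, w, c_in, c_out, k, alpha):
    """Cost when a fraction alpha of the channels lives at half resolution,
    with no explicit high/low information exchange (FOCM-style split)."""
    c_in_lo, c_out_lo = int(alpha * c_in), int(alpha * c_out)
    c_in_hi, c_out_hi = c_in - c_in_lo, c_out - c_out_lo
    hi = conv_flops(h, w, c_in_hi, c_out_hi, k)          # full-resolution path
    lo = conv_flops(h // 2, w // 2, c_in_lo, c_out_lo, k)  # half-resolution path
    return hi + lo

# Hypothetical ResNet-like layer: 56x56 maps, 256 -> 256 channels, 3x3 kernel.
base = conv_flops(56, 56, 256, 256, 3)
mixed = focm_like_flops(56, 56, 256, 256, 3, alpha=0.25)
print(f"per-layer saving: {1 - mixed / base:.1%}")  # prints "per-layer saving: 42.2%"
```

Because the low-resolution path pays only a quarter of the spatial cost, even a modest `alpha` yields a substantial per-layer saving.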