A Computationally Efficient Neural Network For Faster Image Classification

Ananya Paul, L. TejpratapG.V.S.
{"title":"A Computationally Efficient Neural Network For Faster Image Classification","authors":"Ananya Paul, L. TejpratapG.V.S.","doi":"10.1109/SSCI.2018.8628751","DOIUrl":null,"url":null,"abstract":"Deep Convolutional Neural Networks have led to series of breakthroughs in image classification. With increasing demand to run DCNN based models on mobile platforms with minimal computing capabilities and lesser storage space, the challenge is optimizing those DCNN models for lesser computation and smaller memory footprint. This paper presents a highly efficient and modularized Deep Neural Network (DNN) model for image classification, which outperforms state of the art models in terms of both speed and accuracy. The proposed DNN model is constructed by repeating a building block that aggregates a set of transformations with the same topology. In order to make a lighter model, it uses Depthwise Separable convolution, Grouped convolution and identity shortcut connections. It reduces computations approximately by 100M FLOPs in comparison to MobileNet with a slight improvement in accuracy when validated on CIFAR-10, CIFAR-100 and Caltech-256 datasets.","PeriodicalId":235735,"journal":{"name":"2018 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI.2018.8628751","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep Convolutional Neural Networks (DCNNs) have led to a series of breakthroughs in image classification. With the increasing demand to run DCNN-based models on mobile platforms with minimal computing capability and limited storage, the challenge is to optimize these models for less computation and a smaller memory footprint. This paper presents a highly efficient, modularized Deep Neural Network (DNN) model for image classification that outperforms state-of-the-art models in both speed and accuracy. The proposed model is constructed by repeating a building block that aggregates a set of transformations with the same topology. To keep the model lightweight, it uses depthwise separable convolution, grouped convolution, and identity shortcut connections. It reduces computation by approximately 100M FLOPs compared to MobileNet, with a slight improvement in accuracy when validated on the CIFAR-10, CIFAR-100, and Caltech-256 datasets.
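The abstract names the three ingredients of the repeated building block: grouped convolution, depthwise separable convolution, and an identity shortcut. The sketch below is a minimal, hypothetical PyTorch rendering of such a block; the channel counts, group cardinality, and layer ordering are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of a building block combining grouped convolution,
# depthwise separable convolution, and an identity shortcut, as described
# in the abstract. Channel counts, `groups`, and layer ordering are
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class EfficientBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        # Grouped 1x1 convolution: splits channels into `groups` parallel
        # transformations with the same topology, reducing FLOPs.
        self.grouped_pw = nn.Conv2d(channels, channels, kernel_size=1,
                                    groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        # Depthwise separable convolution = depthwise 3x3 + pointwise 1x1.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1,
                                   bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x  # identity shortcut connection
        out = self.relu(self.bn1(self.grouped_pw(x)))
        out = self.relu(self.bn2(self.pointwise(self.depthwise(out))))
        return self.relu(out + identity)  # residual addition


# Usage: the full model is built by repeating such blocks; here we just
# check that one block preserves the feature-map shape.
if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)   # e.g. a CIFAR-sized feature map
    block = EfficientBlock(channels=64)
    print(block(x).shape)            # torch.Size([1, 64, 32, 32])
```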