CluSpa: Computation Reduction in CNN Inference by Exploiting Clustering and Sparsity

Imlijungla Longchar, Amey Varhade, Chetan Ingle, Saurabh Baranwal, H. Kapoor
{"title":"CluSpa:利用聚类和稀疏性减少CNN推理的计算量","authors":"Imlijungla Longchar, Amey Varhade, Chetan Ingle, Saurabh Baranwal, H. Kapoor","doi":"10.1145/3564121.3564132","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks (CNNs) have grown in popularity and usage tremendously over the last few years, spanning across different task such as computer vision tasks, natural language processing, video recognition, and recommender systems. Despite the algorithmic advancements that drove the growth of CNN still has considerable computational and memory overhead that poses challenges in achieving real-time performance. Each input image requires millions to even billions of elementary arithmetic operations before the network obtains the result. In CNNs, convolutional and pooling layers are followed by activation layers involving various activation functions. Hence, a lot of work has been done to reduce these costs in the last few years. Numerous optimizations have addressed at both hardware and software levels. In this paper, we propose a software-based solution for improving the performance of inference of networks. We suggest a technique for the approximate computation of the convolution operation based on clustering and sharing of weights. We have utilized Gaussian Mixture Models for clustering. We exploit weight sparsity to further reduce computations on top of the clustering method. We were able to achieve a considerable reduction in the MAC operations and the overall computation speedup on popular CNN architectures","PeriodicalId":166150,"journal":{"name":"Proceedings of the Second International Conference on AI-ML Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CluSpa: Computation Reduction in CNN Inference by exploiting Clustering and Sparsity\",\"authors\":\"Imlijungla Longchar, Amey Varhade, Chetan Ingle, Saurabh Baranwal, H. Kapoor\",\"doi\":\"10.1145/3564121.3564132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional Neural Networks (CNNs) have grown in popularity and usage tremendously over the last few years, spanning across different task such as computer vision tasks, natural language processing, video recognition, and recommender systems. Despite the algorithmic advancements that drove the growth of CNN still has considerable computational and memory overhead that poses challenges in achieving real-time performance. Each input image requires millions to even billions of elementary arithmetic operations before the network obtains the result. In CNNs, convolutional and pooling layers are followed by activation layers involving various activation functions. Hence, a lot of work has been done to reduce these costs in the last few years. Numerous optimizations have addressed at both hardware and software levels. In this paper, we propose a software-based solution for improving the performance of inference of networks. We suggest a technique for the approximate computation of the convolution operation based on clustering and sharing of weights. We have utilized Gaussian Mixture Models for clustering. We exploit weight sparsity to further reduce computations on top of the clustering method. 
We were able to achieve a considerable reduction in the MAC operations and the overall computation speedup on popular CNN architectures\",\"PeriodicalId\":166150,\"journal\":{\"name\":\"Proceedings of the Second International Conference on AI-ML Systems\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Second International Conference on AI-ML Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3564121.3564132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Second International Conference on AI-ML Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3564121.3564132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Convolutional Neural Networks (CNNs) have grown tremendously in popularity and usage over the last few years, spanning tasks such as computer vision, natural language processing, video recognition, and recommender systems. Despite the algorithmic advances that have driven this growth, CNNs still carry considerable computational and memory overhead, which makes real-time performance challenging. Each input image requires millions or even billions of elementary arithmetic operations before the network produces a result. In CNNs, convolutional and pooling layers are followed by activation layers involving various activation functions. Consequently, much work has gone into reducing these costs in recent years, with numerous optimizations proposed at both the hardware and software levels. In this paper, we propose a software-based solution for improving CNN inference performance. We present a technique for approximate computation of the convolution operation based on clustering and weight sharing, using Gaussian Mixture Models for clustering. On top of the clustering method, we exploit weight sparsity to further reduce computation. We achieve a considerable reduction in MAC operations and an overall computation speedup on popular CNN architectures.
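To make the described approach concrete, below is a minimal Python sketch, not the authors' implementation: it clusters a layer's weights with scikit-learn's GaussianMixture, uses the cluster centroids as shared weights, and computes an approximate dot product with at most one multiplication per cluster, skipping clusters whose centroid is near zero to exploit sparsity. All function names and parameters (cluster_weights, approx_dot, zero_tol) are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the idea in the abstract (not the authors' code):
# cluster a convolution layer's weights with a Gaussian Mixture Model,
# share the cluster centroids as the effective weights, then compute a
# dot product by first accumulating the inputs that map to each centroid,
# so each centroid is multiplied only once; near-zero clusters are skipped.

import numpy as np
from sklearn.mixture import GaussianMixture


def cluster_weights(weights, n_clusters=8, seed=0):
    """Fit a GMM to the flattened weights; return (centroids, cluster labels)."""
    flat = weights.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(flat)
    centroids = gmm.means_.ravel()
    labels = gmm.predict(flat).reshape(weights.shape)
    return centroids, labels


def approx_dot(window, centroids, labels, zero_tol=1e-2):
    """Approximate sum(window * shared_weights) with at most one MAC per cluster."""
    acc = 0.0
    for c, centroid in enumerate(centroids):
        if abs(centroid) < zero_tol:       # sparsity: skip near-zero clusters
            continue
        mask = labels == c                 # inputs that share this weight value
        if mask.any():
            acc += centroid * window[mask].sum()   # single multiply per cluster
    return acc


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kernels = rng.normal(size=(8, 3, 3))        # a small bank of 3x3 filters
    kernels[np.abs(kernels) < 0.4] = 0.0        # pruned (sparse) weights
    centroids, labels = cluster_weights(kernels, n_clusters=6)

    window = rng.normal(size=(3, 3))            # one input patch
    exact = float((kernels[0] * window).sum())
    approx = approx_dot(window, centroids, labels[0])
    print(f"exact={exact:.4f}  approx={approx:.4f}")
```

In this sketch the number of multiplications per window drops from the kernel size to at most the number of non-zero clusters, which is the source of the MAC reduction the paper reports; the trade-off is the approximation error introduced by replacing each weight with its centroid.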