The C-CNN model: Do we really need multiplicative synapses in convolutional neural networks?

R. Dogaru, Adrian-Dumitru Mirica, I. Dogaru
DOI: 10.1109/comm54429.2022.9817267
Published in: 2022 14th International Conference on Communications (COMM), 2022-06-16
Citation count: 0

Abstract

Comparative synapses are proposed and investigated in the context of convolutional neural networks as replacements for the traditional, multiplier-based synapses. A comparative synapse is an operator inspired by the min() operator used in fuzzy logic as a replacement for the product in implementing the AND function. Its implementation complexity is linear in the number of bits, unlike multipliers, which require quadratic complexity; at a typical resolution of 8 bits, a comparative synapse would therefore reduce the hardware resources allocated to the operator roughly 8-fold. A C-CNN model was constructed to support comparative synapses together with their update and error-propagation rules, and GPU acceleration of the model was achieved using CuPy. The model was trained on several widely known image-recognition datasets, including MNIST, CIFAR and USPS. It turns out that functional performance (accuracy) is not dramatically affected in the C-CNN compared with a similar traditional CNN model using multiplicative operators, opening an interesting implementation perspective, particularly for TinyML and hardware-oriented solutions, with significant reductions in energy, silicon area and cost. The approach scales to more sophisticated CNN models, provided adequate optimized operators adapted to this new synaptic model.
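To make the idea concrete, the following is a minimal NumPy sketch contrasting a multiplicative synapse with a comparative one inside a 1-D convolution. The comparative operator shown here — a sign-preserving min() of the magnitudes, `sign(w)·sign(x)·min(|w|, |x|)` — is an illustrative assumption motivated by the fuzzy-logic min() mentioned in the abstract; the exact operator defined in the C-CNN paper may differ. The function names (`mult_synapse`, `comp_synapse`, `conv1d`) are hypothetical.

```python
import numpy as np

def mult_synapse(w, x):
    # Traditional multiplicative synapse: elementwise product.
    return w * x

def comp_synapse(w, x):
    # Hypothetical comparative synapse: a sign-preserving min(),
    # inspired by the fuzzy-logic AND. It matches the sign behavior
    # of the product but replaces the magnitude product with min().
    return np.sign(w) * np.sign(x) * np.minimum(np.abs(w), np.abs(x))

def conv1d(x, kernel, synapse):
    # Valid-mode 1-D sliding-window correlation where the per-tap
    # product is replaced by an arbitrary synapse operator.
    n, k = len(x), len(kernel)
    out = np.empty(n - k + 1)
    for i in range(n - k + 1):
        out[i] = synapse(kernel, x[i:i + k]).sum()
    return out

x = np.array([0.5, -1.0, 2.0, 0.25, -0.5])
w = np.array([1.0, -0.5, 0.25])
print(conv1d(x, w, mult_synapse))  # ordinary convolution output
print(conv1d(x, w, comp_synapse))  # comparative-synapse output
```

The point of the sketch is that swapping the operator leaves the convolution structure (and hence the surrounding CNN architecture) untouched; only the per-tap arithmetic changes, which is where the claimed hardware savings come from, since an n-bit comparator is O(n) in gates while an n-bit multiplier is O(n²).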