Striping input feature map cache for reducing off-chip memory traffic in CNN accelerators

Q3 Engineering
R. Struharik, Vuk Vranjkovic
{"title":"条带化输入特征映射缓存以减少CNN加速器的片外内存流量","authors":"R. Struharik, Vuk Vranjkovic","doi":"10.5937/TELFOR2002116S","DOIUrl":null,"url":null,"abstract":"Data movement between the Convolutional Neural Network (CNN) accelerators and off-chip memory is critical concerning the overall power consumption. Minimizing power consumption is particularly important for low power embedded applications. Specific CNN computes patterns offer a possibility of significant data reuse, leading to the idea of using specialized on-chip cache memories which enable a significant improvement in power consumption. However, due to the unique caching pattern present within CNNs, standard cache memories would not be efficient. In this paper, a novel on-chip cache memory architecture, based on the idea of input feature map striping, is proposed, which requires significantly less on-chip memory resources compared to previously proposed solutions. Experiment results show that the proposed cache architecture can reduce on-chip memory size by a factor of 16 or more, while increasing power consumption no more than 15%, compared to some of the previously proposed solutions.","PeriodicalId":37719,"journal":{"name":"Telfor Journal","volume":"41 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Striping input feature map cache for reducing off-chip memory traffic in CNN accelerators\",\"authors\":\"R. Struharik, Vuk Vranjkovic\",\"doi\":\"10.5937/TELFOR2002116S\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data movement between the Convolutional Neural Network (CNN) accelerators and off-chip memory is critical concerning the overall power consumption. Minimizing power consumption is particularly important for low power embedded applications. Specific CNN computes patterns offer a possibility of significant data reuse, leading to the idea of using specialized on-chip cache memories which enable a significant improvement in power consumption. However, due to the unique caching pattern present within CNNs, standard cache memories would not be efficient. In this paper, a novel on-chip cache memory architecture, based on the idea of input feature map striping, is proposed, which requires significantly less on-chip memory resources compared to previously proposed solutions. 
Experiment results show that the proposed cache architecture can reduce on-chip memory size by a factor of 16 or more, while increasing power consumption no more than 15%, compared to some of the previously proposed solutions.\",\"PeriodicalId\":37719,\"journal\":{\"name\":\"Telfor Journal\",\"volume\":\"41 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Telfor Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5937/TELFOR2002116S\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Telfor Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5937/TELFOR2002116S","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

Data movement between Convolutional Neural Network (CNN) accelerators and off-chip memory is critical to overall power consumption, and minimizing power consumption is particularly important for low-power embedded applications. Specific CNN compute patterns offer the possibility of significant data reuse, which leads to the idea of using specialized on-chip cache memories that enable a significant reduction in power consumption. However, due to the unique caching pattern present within CNNs, standard cache memories would not be efficient. In this paper, a novel on-chip cache memory architecture, based on the idea of input feature map striping, is proposed, which requires significantly less on-chip memory resources than previously proposed solutions. Experimental results show that the proposed cache architecture can reduce on-chip memory size by a factor of 16 or more, while increasing power consumption by no more than 15%, compared to some of the previously proposed solutions.
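As a rough illustration of the trade-off the abstract describes, the following Python sketch models a hypothetical striped input-feature-map (IFM) cache for a single KxK, stride-1 convolution layer. The vertical-stripe layout, the traffic model, and all layer dimensions are assumptions made purely for illustration; they are not taken from the paper and do not reproduce the authors' architecture or results.

```python
"""Toy model of IFM stripe caching (illustrative only, not the paper's design)."""
import math

def offchip_words_no_cache(H, W, C, K, stride=1):
    # Baseline: every K x K x C input window is re-fetched from off-chip memory.
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    return out_h * out_w * K * K * C

def offchip_words_striped(H, W, C, K, stripe_w):
    # Assumed scheme: the IFM is split into vertical stripes of width stripe_w
    # (stripe_w > K). Each stripe is streamed from off-chip memory once, top to
    # bottom, while a small on-chip buffer keeps the K most recent rows of the
    # stripe so overlapping convolution windows reuse them. Adjacent stripes
    # overlap by K - 1 columns, which is the only data fetched more than once.
    out_w = W - K + 1
    out_cols_per_stripe = stripe_w - K + 1
    n_stripes = math.ceil(out_w / out_cols_per_stripe)
    return n_stripes * stripe_w * H * C  # upper bound (last stripe may be narrower)

def onchip_buffer_words(C, K, stripe_w):
    # On-chip storage needed: K rows of one stripe, across all C channels.
    return K * stripe_w * C

if __name__ == "__main__":
    H, W, C, K = 224, 224, 64, 3          # hypothetical layer dimensions
    base = offchip_words_no_cache(H, W, C, K)
    for stripe_w in (16, 32, W):          # stripe_w == W is a classic full-row line buffer
        traffic = offchip_words_striped(H, W, C, K, stripe_w)
        words = onchip_buffer_words(C, K, stripe_w)
        print(f"stripe_w={stripe_w:3d}  on-chip buffer={words:6d} words  "
              f"off-chip traffic vs. no cache: {traffic / base:.3f}")
```

Under these assumptions, narrowing the stripes shrinks the on-chip buffer by roughly an order of magnitude while off-chip traffic grows only modestly relative to a full-width line buffer, which is the kind of memory-size versus traffic trade-off the abstract quantifies.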
Source journal
Telfor Journal (Engineering - Media Technology)
CiteScore: 1.50
Self-citation rate: 0.00%
Articles per year: 8
Review time: 23 weeks
Journal description: The TELFOR Journal is an open-access international scientific journal publishing improved and extended versions of the selected best papers initially presented at the annual TELFOR Conference (www.telfor.rs), papers invited by the Editorial Board, and papers submitted directly by authors. All papers are subject to review. The TELFOR Journal is published in English, in both electronic and printed versions. As an IEEE co-supported publication, it follows all IEEE rules and procedures. The TELFOR Journal covers all the essential branches of modern telecommunications and information technology: Telecommunications Policy and Services, Telecommunications Networks, Radio Communications, Communications Systems, Signal Processing, Optical Communications, Applied Electromagnetics, Applied Electronics, Multimedia, Software Tools and Applications, as well as other fields related to ICT. This broad spectrum of topics reflects the rapid convergence, through telecommunications, of the underlying technologies towards the information and knowledge society. The Journal provides a medium for exchanging research results and technological achievements accomplished by the scientific community from academia and industry.