NeuroPIM: Flexible Neural Accelerator for Processing-in-Memory Architectures

Ali Monavari Bidgoli, Sepideh Fattahi, Seyyed Hossein Seyyedaghaei Rezaei, M. Modarressi, M. Daneshtalab
{"title":"用于内存处理架构的灵活神经加速器","authors":"Ali Monavari Bidgoli, Sepideh Fattahi, Seyyed Hossein Seyyedaghaei Rezaei, M. Modarressi, M. Daneshtalab","doi":"10.1109/DDECS57882.2023.10139567","DOIUrl":null,"url":null,"abstract":"The performance of microprocessors under many modern workloads is mainly limited by the off-chip memory bandwidth. The emerging process-in-memory paradigm present a unique opportunity to reduce data movement overheads by moving computation closer to memory. State-of-the-art processing-in-memory proposals stack a logic layer on top of one or multiple memory layers in a 3D fashion and leverage the logic layer to build near-memory processing units. Such processing units are either application-specific accelerators or general-purpose cores. In this paper, we present NeuroPIM, a new processing-in-memory architecture that uses a neural network as the memory-side general-purpose accelerator. This design is mainly motivated by the observation that in many real-world applications, some program regions, or even the entire program, can be replaced by a neural network that is learned to approximate the program’s output. NeuroPIM benefits from both the flexibility of general-purpose processors and superior performance of application-specific accelerators. Experimental results show that NeuroPIM provides up to 41% speedup over a processor-side neural network accelerator and up to 8x speedup over a general-purpose processor.","PeriodicalId":220690,"journal":{"name":"2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"NeuroPIM: Felxible Neural Accelerator for Processing-in-Memory Architectures\",\"authors\":\"Ali Monavari Bidgoli, Sepideh Fattahi, Seyyed Hossein Seyyedaghaei Rezaei, M. Modarressi, M. Daneshtalab\",\"doi\":\"10.1109/DDECS57882.2023.10139567\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The performance of microprocessors under many modern workloads is mainly limited by the off-chip memory bandwidth. The emerging process-in-memory paradigm present a unique opportunity to reduce data movement overheads by moving computation closer to memory. State-of-the-art processing-in-memory proposals stack a logic layer on top of one or multiple memory layers in a 3D fashion and leverage the logic layer to build near-memory processing units. Such processing units are either application-specific accelerators or general-purpose cores. In this paper, we present NeuroPIM, a new processing-in-memory architecture that uses a neural network as the memory-side general-purpose accelerator. This design is mainly motivated by the observation that in many real-world applications, some program regions, or even the entire program, can be replaced by a neural network that is learned to approximate the program’s output. NeuroPIM benefits from both the flexibility of general-purpose processors and superior performance of application-specific accelerators. 
Experimental results show that NeuroPIM provides up to 41% speedup over a processor-side neural network accelerator and up to 8x speedup over a general-purpose processor.\",\"PeriodicalId\":220690,\"journal\":{\"name\":\"2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DDECS57882.2023.10139567\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DDECS57882.2023.10139567","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The performance of microprocessors under many modern workloads is mainly limited by off-chip memory bandwidth. The emerging processing-in-memory paradigm presents a unique opportunity to reduce data-movement overheads by moving computation closer to memory. State-of-the-art processing-in-memory proposals stack a logic layer on top of one or more memory layers in a 3D fashion and leverage the logic layer to build near-memory processing units. Such processing units are either application-specific accelerators or general-purpose cores. In this paper, we present NeuroPIM, a new processing-in-memory architecture that uses a neural network as the memory-side general-purpose accelerator. This design is mainly motivated by the observation that in many real-world applications, some program regions, or even the entire program, can be replaced by a neural network trained to approximate the program's output. NeuroPIM benefits from both the flexibility of general-purpose processors and the superior performance of application-specific accelerators. Experimental results show that NeuroPIM provides up to 41% speedup over a processor-side neural network accelerator and up to 8x speedup over a general-purpose processor.
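To make the neural-approximation idea behind NeuroPIM concrete, the sketch below trains a tiny neural network to mimic the input/output behavior of a program region and then substitutes the network for that region at run time. This is a minimal illustration only, not the paper's implementation: the target function program_region(), the network size, and the plain NumPy training loop are all illustrative assumptions; in NeuroPIM the trained approximator would execute on the memory-side accelerator rather than on the host CPU.

import numpy as np

# Stand-in for an expensive program region to be approximated (hypothetical example).
def program_region(x):
    return np.sin(3.0 * x) + 0.5 * x * x

rng = np.random.default_rng(0)

# Collect input/output samples of the region; these form the training set.
X = rng.uniform(-2.0, 2.0, size=(2048, 1))
Y = program_region(X)

# A tiny one-hidden-layer MLP; near-memory accelerators favor small, fixed-size networks.
W1 = rng.normal(0.0, 0.5, size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, size=(32, 1))
b2 = np.zeros(1)

lr = 0.01
for step in range(5000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - Y  # gradient of the mean-squared error, up to a constant factor

    # Backward pass, plain full-batch gradient descent.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W2 -= lr * dW2
    b2 -= lr * db2
    W1 -= lr * dW1
    b1 -= lr * db1

# Replacement for program_region(); on NeuroPIM this forward pass would run memory-side.
def neural_approximation(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

test = rng.uniform(-2.0, 2.0, size=(256, 1))
print("mean abs error:", np.abs(neural_approximation(test) - program_region(test)).mean())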