Emerging Memory Devices Beyond Conventional Data Storage: Paving the Path for Energy-Efficient Brain-Inspired Computing

IF 1.7 Q4 ELECTROCHEMISTRY
R. Jha
{"title":"超越传统数据存储的新兴存储设备:为高效节能的大脑启发计算铺平道路","authors":"R. Jha","doi":"10.1149/2.f10231if","DOIUrl":null,"url":null,"abstract":"The current state of neuromorphic computing broadly encompasses domain-specific computing architectures designed to accelerate machine learning (ML) and artificial intelligence (AI) algorithms. As is well known, AI/ML algorithms are limited by memory bandwidth. Novel computing architectures are necessary to overcome this limitation. There are several options that are currently under investigation using both mature and emerging memory technologies. For example, mature memory technologies such as high-bandwidth memories (HBMs) are integrated with logic units on the same die to bring memory closer to the computing units. There are also research efforts where in-memory computing architectures have been implemented using DRAMs or flash memory technologies. However, DRAMs suffer from scaling limitations, while flash memory devices suffer from endurance issues. Additionally, in spite of this significant progress, the massive energy consumption needed in neuromorphic processors while meeting the required training and inferencing performance for AI/ML algorithms for future applications needs to be addressed. On the AI/ML algorithm side, there are several pending issues such as life-long learning, explainability, context-based decision making, multimodal association of data, adaptation to address personalized responses, and resiliency. These unresolved challenges in AI/ML have led researchers to explore brain-inspired computing architectures and paradigms.","PeriodicalId":47157,"journal":{"name":"Electrochemical Society Interface","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emerging Memory Devices Beyond Conventional Data Storage: Paving the Path for Energy-Efficient Brain-Inspired Computing\",\"authors\":\"R. Jha\",\"doi\":\"10.1149/2.f10231if\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The current state of neuromorphic computing broadly encompasses domain-specific computing architectures designed to accelerate machine learning (ML) and artificial intelligence (AI) algorithms. As is well known, AI/ML algorithms are limited by memory bandwidth. Novel computing architectures are necessary to overcome this limitation. There are several options that are currently under investigation using both mature and emerging memory technologies. For example, mature memory technologies such as high-bandwidth memories (HBMs) are integrated with logic units on the same die to bring memory closer to the computing units. There are also research efforts where in-memory computing architectures have been implemented using DRAMs or flash memory technologies. However, DRAMs suffer from scaling limitations, while flash memory devices suffer from endurance issues. Additionally, in spite of this significant progress, the massive energy consumption needed in neuromorphic processors while meeting the required training and inferencing performance for AI/ML algorithms for future applications needs to be addressed. On the AI/ML algorithm side, there are several pending issues such as life-long learning, explainability, context-based decision making, multimodal association of data, adaptation to address personalized responses, and resiliency. 
These unresolved challenges in AI/ML have led researchers to explore brain-inspired computing architectures and paradigms.\",\"PeriodicalId\":47157,\"journal\":{\"name\":\"Electrochemical Society Interface\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electrochemical Society Interface\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1149/2.f10231if\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ELECTROCHEMISTRY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electrochemical Society Interface","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1149/2.f10231if","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ELECTROCHEMISTRY","Score":null,"Total":0}
Citations: 0

Abstract

The current state of neuromorphic computing broadly encompasses domain-specific computing architectures designed to accelerate machine learning (ML) and artificial intelligence (AI) algorithms. As is well known, AI/ML algorithms are limited by memory bandwidth. Novel computing architectures are necessary to overcome this limitation. There are several options that are currently under investigation using both mature and emerging memory technologies. For example, mature memory technologies such as high-bandwidth memories (HBMs) are integrated with logic units on the same die to bring memory closer to the computing units. There are also research efforts where in-memory computing architectures have been implemented using DRAMs or flash memory technologies. However, DRAMs suffer from scaling limitations, while flash memory devices suffer from endurance issues. Additionally, in spite of this significant progress, the massive energy consumption needed in neuromorphic processors while meeting the required training and inferencing performance for AI/ML algorithms for future applications needs to be addressed. On the AI/ML algorithm side, there are several pending issues such as life-long learning, explainability, context-based decision making, multimodal association of data, adaptation to address personalized responses, and resiliency. These unresolved challenges in AI/ML have led researchers to explore brain-inspired computing architectures and paradigms.
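As an illustrative aside (not part of the article), the appeal of in-memory computing mentioned above can be made concrete with a minimal sketch of an analog memory crossbar: weights are stored as device conductances, a vector of read voltages is applied to the rows, and the column currents give the matrix-vector product where the data resides, so the weights never move over a memory bus. All device values, the differential-pair weight mapping, and the noise term below are hypothetical placeholders chosen only to show the idea.

```python
import numpy as np

# Illustrative sketch, assuming a generic memristive crossbar; the conductance
# range, weight mapping, and read-noise model are hypothetical, not from the article.
rng = np.random.default_rng(0)

def program_conductances(weights, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto a differential pair of conductances (G+, G-) in siemens."""
    w = weights / np.max(np.abs(weights))            # normalize weights to [-1, 1]
    g_pos = g_min + (g_max - g_min) * np.clip(w, 0, 1)
    g_neg = g_min + (g_max - g_min) * np.clip(-w, 0, 1)
    return g_pos, g_neg

def crossbar_mvm(g_pos, g_neg, v_in, read_noise=0.01):
    """Column currents I = (G+ - G-)^T V (Ohm's and Kirchhoff's laws), plus read noise."""
    i_out = (g_pos - g_neg).T @ v_in
    return i_out * (1 + read_noise * rng.standard_normal(i_out.shape))

weights = rng.standard_normal((4, 3))                # 4 inputs x 3 outputs
v_in = rng.standard_normal(4) * 0.1                  # small read voltages (V)
g_pos, g_neg = program_conductances(weights)
print(crossbar_mvm(g_pos, g_neg, v_in))              # in-memory matrix-vector product
```

The point of the sketch is only that the multiply-accumulate happens in the memory array itself; the energy and endurance questions raised in the abstract concern how well real DRAM, flash, or emerging devices can play the role of these idealized conductances.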
Source journal: Electrochemical Society Interface
CiteScore: 2.10
Self-citation rate: 5.60%
Articles published: 62