Memory-Centric Neuromorphic Computing With Nanodevices

D. Querlioz, J. Grollier, T. Hirtzlin, Jacques-Olivier Klein, E. Nowak, E. Vianello, M. Bocquet, J. Portal, M. Romera, P. Talatchian
DOI: 10.1109/BIOCAS.2019.8919010
Published in: 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS)
Publication date: 2019-10-01
Citation count: 0

Abstract

When performing artificial intelligence, CPUs and GPUs consume considerably more energy for moving data between logic and memory units than for doing arithmetic. Brains, by contrast, achieve superior energy efficiency by fusing logic and memory entirely. Currently, emerging memory nanodevices give us an opportunity to reproduce this concept. In this overview paper, we look at neuroscience inspiration to extract lessons on the design of memory-centric neuromorphic systems. We study the reliance of brains on approximate memory strategies, which can be translated to AI. We give the example of a hardware binarized neural network with resistive memory. Based on measurements on a hybrid CMOS/resistive memory chip, we see that such systems can exploit the properties of emerging memories without error correction, and achieve extremely high energy efficiency. Second, we see that brains use the physics of their memory devices in a way much richer than only storage. This can inspire radical electronic designs, where memory devices become a core part of computing. We have for example fabricated neural networks where magnetic memories are used as nonlinear oscillators to implement neurons, and their electrical couplings implement synapses. Such designs can harness the rich physics of nanodevices, without suffering from their drawbacks.
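To make the binarized-neural-network idea concrete: in a BNN, both weights and activations are constrained to {-1, +1}, so each multiply-accumulate collapses to an XNOR plus a popcount in hardware, which is what makes such networks a natural fit for dense resistive-memory arrays. The sketch below is a minimal software emulation of one fully connected BNN layer; it illustrates the general principle only, and the function names and dimensions are illustrative, not taken from the authors' CMOS/RRAM chip.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}, the encoding used in binarized neural networks."""
    return np.where(x >= 0, 1, -1)

def bnn_layer(activations, weights):
    """One fully connected BNN layer.

    With all operands in {-1, +1}, each dot product is equivalent to
    XNOR + popcount in hardware; here we emulate it with integer math.
    """
    pre = weights @ activations   # dot products of +/-1 vectors
    return binarize(pre)          # binary (sign) activation

rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(8))        # binary input vector
W = binarize(rng.standard_normal((4, 8)))   # binary weight matrix
out = bnn_layer(a, W)
print(out)  # every entry is -1 or +1
```

Because the weights are single-bit, an occasional bit flip in the memory array perturbs only one term of a many-term popcount, which is one intuition for why such networks can tolerate emerging-memory device errors without explicit error correction.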