Hyperdimensional computing with 3D VRRAM in-memory kernels: Device-architecture co-design for energy-efficient, error-resilient language recognition

Haitong Li, Tony F. Wu, Abbas Rahimi, Kai-Shin Li, M. Rusch, Chang-Hsien Lin, Juo-Luen Hsu, M. Sabry, S. Eryilmaz, Joon Sohn, W. Chiu, Min-Cheng Chen, Tsung-Ta Wu, J. Shieh, W. Yeh, J. Rabaey, S. Mitra, H. Wong
Published in: 2016 IEEE International Electron Devices Meeting (IEDM), December 2016
DOI: 10.1109/IEDM.2016.7838428
Citations: 101

Abstract

The ability to learn from few examples, known as one-shot learning, is a hallmark of human cognition. Hyperdimensional (HD) computing is a brain-inspired computational framework capable of one-shot learning, using random binary vectors with high dimensionality. Device-architecture co-design of HD cognitive computing systems using 3D VRRAM/CMOS is presented for language recognition. Multiplication, addition, and permutation (MAP), the central operations of HD computing, are experimentally demonstrated on 4-layer 3D VRRAM/FinFET as non-volatile in-memory MAP kernels. Extensive cycle-to-cycle (up to 10^12 cycles) and wafer-level device-to-device (256 RRAMs) experiments are performed to validate reproducibility and robustness. For the 28-nm node, the 3D in-memory architecture reduces total energy consumption by 52.2% with 412 times less area compared with a low-power (LP) digital design (using registers as memory), owing to the energy-efficient VRRAM MAP kernels and dense connectivity. Meanwhile, the system, trained with 21 sample texts, achieves 90.4% accuracy recognizing 21 European languages on 21,000 test sentences. Hard-error analysis shows the HD architecture is remarkably resilient to RRAM endurance failures, making the use of various types of RRAMs/CBRAMs (1k–10M endurance) feasible.
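For readers unfamiliar with the MAP primitives the abstract refers to, the following is a minimal NumPy sketch of binary-hypervector encoding for text (bind = element-wise XOR, bundle = majority vote, permute = cyclic shift), in the style of HD language recognition. The dimensionality, item memory, and n-gram scheme here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10000  # hypervector dimensionality (illustrative choice)

# Item memory: one random binary hypervector per symbol
# (assumption: lowercase letters plus space)
item_memory = {ch: rng.integers(0, 2, D, dtype=np.uint8)
               for ch in "abcdefghijklmnopqrstuvwxyz "}

def permute(v, k=1):
    # Permutation: cyclic shift encodes sequence position
    return np.roll(v, k)

def bind(a, b):
    # Multiplication: element-wise XOR of binary hypervectors
    return np.bitwise_xor(a, b)

def bundle(vectors):
    # Addition: element-wise majority vote over a set of hypervectors
    s = np.sum(vectors, axis=0)
    return (s > len(vectors) / 2).astype(np.uint8)

def encode_text(text, n=3):
    # Encode text as the bundle of its permuted-and-bound n-grams
    grams = []
    for i in range(len(text) - n + 1):
        g = item_memory[text[i]]
        for j in range(1, n):
            g = bind(permute(g, j), item_memory[text[i + j]])
        grams.append(g)
    return bundle(grams)

def similarity(a, b):
    # Normalized Hamming similarity: 1.0 = identical, ~0.5 = unrelated
    return 1.0 - np.mean(a != b)

def flip_bits(v, p, flip_rng):
    # Simulate random hard errors by flipping each bit with probability p
    mask = (flip_rng.random(v.shape[0]) < p).astype(np.uint8)
    return np.bitwise_xor(v, mask)
```

A language profile would be built by bundling the n-gram vectors of a training text, and a test sentence classified by picking the profile with highest similarity. Because information is spread uniformly across all D components, flipping even 10% of a hypervector's bits leaves it far closer to its original (similarity near 0.9) than to an unrelated vector (near 0.5), which is the intuition behind the abstract's hard-error resilience result.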