Attention-in-Memory for Few-Shot Learning with Configurable Ferroelectric FET Arrays
D. Reis, Ann Franchesca Laguna, M. Niemier, X. Hu
2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC), published 2021-01-18. DOI: 10.1145/3394885.3431526
Citations: 8
Abstract
Attention-in-Memory (AiM), a computing-in-memory (CiM) design, is introduced to implement the attentional layer of Memory Augmented Neural Networks (MANNs). AiM consists of a memory array based on ferroelectric FETs (FeFETs) along with CMOS peripheral circuits that implement configurable functionality, i.e., it can be dynamically reconfigured from a ternary content-addressable memory (TCAM) into a general-purpose (GP) CiM. Compared to state-of-the-art accelerators, AiM achieves comparable end-to-end speed-up and energy for MANNs, with better accuracy (95.14% vs. 92.21%, and 95.14% vs. 91.98%) at iso-memory size, for a 5-way 5-shot inference task with the Omniglot dataset.
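The attentional layer that AiM accelerates is, at its core, a nearest-neighbor lookup: a query key is compared against the stored support-set keys, and the label of the best match is returned. The TCAM/CiM array performs this search in place rather than streaming keys to a host. A minimal software sketch of that lookup is shown below, using cosine similarity; the function names and the tiny 2-D example data are illustrative, not from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def attention_lookup(query, memory_keys, memory_labels):
    """Return the label of the stored key most similar to the query.

    This is the (hard, top-1) attention step of a MANN's attentional
    layer; hardware like AiM evaluates all key comparisons in parallel
    inside the memory array instead of looping as we do here.
    """
    sims = [cosine_similarity(query, k) for k in memory_keys]
    best = max(range(len(sims)), key=sims.__getitem__)
    return memory_labels[best]

# Toy support set: one stored key per class.
keys = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
labels = ["A", "B", "C"]
print(attention_lookup([0.9, 0.1], keys, labels))  # query closest to class "A"
```

In a real 5-way 5-shot setting the support set holds 25 keys (5 per class) of much higher dimension, and the comparison may use soft attention (a softmax over similarities) rather than a hard top-1 match; the hardware trade-off the paper targets is the same either way.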