A 40nm 5.6TOPS/W 239GOPS/mm2 Self-Attention Processor with Sign Random Projection-based Approximation

Seong Hoon Seo, Soo-Uck Kim, Sungjun Jung, S. Kwon, Hyunseung Lee, Jae W. Lee
{"title":"A 40nm 5.6TOPS/W 239GOPS/mm2 Self-Attention Processor with Sign Random Projection-based Approximation","authors":"Seong Hoon Seo, Soo-Uck Kim, Sungjun Jung, S. Kwon, Hyunseung Lee, Jae W. Lee","doi":"10.1109/ESSCIRC55480.2022.9911343","DOIUrl":null,"url":null,"abstract":"Transformer architecture is one of the most remarkable recent breakthroughs in neural networks, achieving state-of-the-art (SOTA) performance on various natural language processing (NLP) and computer vision tasks. Self-attention is the key enabling operation for transformer-based models. However, its quadratic computational complexity to the sequence length makes this operation the major performance bottleneck for those models. Thus, we propose a novel self-attention accelerator that skips most of the computation by utilizing an approximate candidate selection algorithm. Implemented in a 40nm CMOS technology, our 5.64 mm2 chip operates at 100–600 MHz consuming 48.3-685 mW to achieve the energy and area efficiency of 0.354-5.61 TOPS/W and 239 GOPS/mm2, respectively.","PeriodicalId":168466,"journal":{"name":"ESSCIRC 2022- IEEE 48th European Solid State Circuits Conference (ESSCIRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ESSCIRC 2022- IEEE 48th European Solid State Circuits Conference (ESSCIRC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ESSCIRC55480.2022.9911343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The Transformer architecture is one of the most remarkable recent breakthroughs in neural networks, achieving state-of-the-art (SOTA) performance on various natural language processing (NLP) and computer vision tasks. Self-attention is the key enabling operation for Transformer-based models. However, its computational complexity, which is quadratic in the sequence length, makes this operation the major performance bottleneck for those models. Thus, we propose a novel self-attention accelerator that skips most of the computation by utilizing an approximate candidate selection algorithm. Implemented in a 40nm CMOS technology, our 5.64 mm2 chip operates at 100–600 MHz while consuming 48.3–685 mW, achieving an energy efficiency of 0.354–5.61 TOPS/W and an area efficiency of 239 GOPS/mm2.
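The candidate selection relies on sign random projection (SRP): query and key vectors are hashed by the signs of their projections onto a shared set of random hyperplanes, and the Hamming distance between the resulting bit vectors approximates the angular distance between the original vectors. Exact attention scores then need to be computed only for the few keys whose hashes lie closest to each query, skipping the rest. The sketch below is a minimal NumPy illustration of this general SRP-based selection scheme, not the chip's datapath; the function names and the `num_bits`/`top_k` parameters are hypothetical choices for the example.

```python
import numpy as np

def srp_hash(x, R):
    """Sign random projection: project onto random hyperplanes, keep only the signs.
    Returns one {0,1} bit row per input vector."""
    return (x @ R > 0).astype(np.uint8)

def select_candidates(Q, K, num_bits=64, top_k=32, seed=0):
    """For each query, pick the top_k keys whose SRP hashes are closest in Hamming
    distance; small Hamming distance implies a small angle between the vectors."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((Q.shape[-1], num_bits))   # shared random hyperplanes
    q_bits, k_bits = srp_hash(Q, R), srp_hash(K, R)
    ham = (q_bits[:, None, :] ^ k_bits[None, :, :]).sum(-1)  # XOR + popcount
    return np.argsort(ham, axis=-1)[:, :top_k]               # candidate key indices

def approximate_attention(Q, K, V, top_k=32):
    """Exact softmax attention, but only over the SRP-selected candidate keys;
    score computation for all other (query, key) pairs is skipped."""
    cand = select_candidates(Q, K, top_k=top_k)
    out = np.empty_like(Q)
    scale = 1.0 / np.sqrt(Q.shape[-1])
    for i, idx in enumerate(cand):
        scores = Q[i] @ K[idx].T * scale
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ V[idx]
    return out

# Example: 512 tokens, 64-dim head; only 32 of the 512 keys are scored per query.
Q, K, V = (np.random.randn(512, 64) for _ in range(3))
y = approximate_attention(Q, K, V, top_k=32)
```

In this sketch the candidate count is fixed per query, which keeps the reduced workload regular; a hardware implementation would additionally quantize the projections and hashes to fixed point, but those details are beyond this illustration.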