Stratix 10 NX Architecture and Applications

M. Langhammer, E. Nurvitadhi, B. Pasca, Sergey Gribok
DOI: 10.1145/3431920.3439293
Published in: The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '21), February 17, 2021
Citations: 23

Abstract

The advent of AI has driven the adoption of high-density, low-precision arithmetic on FPGAs. This has resulted in new methods for mapping both arithmetic functions and dataflows onto the fabric, as well as some changes to the embedded DSP Blocks. Technologies outside the FPGA realm have also evolved, such as the addition of tensor structures to GPUs and the introduction of numerous AI ASSPs, all of which claim higher performance and efficiency than current FPGAs. In this paper we introduce the Stratix 10 NX device (NX), an FPGA variant specifically optimized for the AI application space. In addition to the computational capabilities of the standard programmable soft logic fabric, a new type of DSP Block provides the dense arrays of low-precision multipliers typically used in AI implementations. The architecture of the block is tuned for the matrix-matrix and vector-matrix multiplications common in AI, with capabilities designed to work efficiently for both small and large matrix sizes. The base precisions are INT8 and INT4, along with shared-exponent support for block floating point FP16 and FP12 numerics. All additions/accumulations can be done in INT32 or IEEE 754 single-precision floating point (FP32), and multiple blocks can be cascaded together to support larger matrices. We also describe methods by which the smaller-precision multipliers can be aggregated to create larger multipliers that are more applicable to standard signal processing requirements. In terms of overall compute throughput, Stratix 10 NX achieves 143 INT8/FP16 TOPs/FLOPs, or 286 INT4/FP12 TOPs/FLOPs, at 600 MHz. Depending on the configuration, power efficiency is in the range of 1-4 TOPs/W or TFLOPs/W.
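The block floating point formats the abstract mentions (FP16/FP12 with a shared exponent) can be sketched in a few lines: every value in a block shares one exponent, leaving only a small signed integer mantissa per element to feed the INT8/INT4 multipliers. The helper names below are illustrative, not the NX programming interface.

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Quantize a list of floats to block floating point: one shared
    exponent for the whole block, signed integer mantissas of the given
    width (8 bits roughly models the FP16-style block format).
    Hypothetical helper for illustration only."""
    max_mag = max(abs(x) for x in block)
    if max_mag == 0.0:
        return 0, [0] * len(block)
    # Shared exponent chosen so the largest magnitude fills the mantissa
    # range: max_mag = m * 2**shared_exp with 0.5 <= m < 1.
    shared_exp = math.frexp(max_mag)[1]
    scale = 2 ** (mantissa_bits - 1) / 2 ** shared_exp
    limit = 2 ** (mantissa_bits - 1) - 1
    mants = [max(-limit - 1, min(limit, round(x * scale))) for x in block]
    return shared_exp, mants

def bfp_dequantize(shared_exp, mants, mantissa_bits=8):
    """Map the integer mantissas back to floats using the shared exponent."""
    scale = 2 ** shared_exp / 2 ** (mantissa_bits - 1)
    return [m * scale for m in mants]
```

Because only integer mantissas reach the multiply-accumulate array, the hardware cost per product stays at INT8 level while the shared exponent preserves dynamic range across the block.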
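The aggregation of smaller-precision multipliers into a larger one, noted for standard signal processing use, follows the usual schoolbook decomposition: a 16x16 product is the weighted sum of four 8x8 partial products. A minimal software model, assuming unsigned operands (the NX DSP Block performs the equivalent wiring in hardware):

```python
def mul16_from_8x8(a, b):
    """Compose a 16x16 unsigned multiply from four 8x8 partial products.
    a, b: 16-bit unsigned integers."""
    a_hi, a_lo = a >> 8, a & 0xFF
    b_hi, b_lo = b >> 8, b & 0xFF
    # Each 8x8 partial product is shifted to its binary weight and summed:
    # (a_hi*2^8 + a_lo) * (b_hi*2^8 + b_lo)
    return ((a_hi * b_hi) << 16) \
         + ((a_hi * b_lo) << 8) \
         + ((a_lo * b_hi) << 8) \
         + (a_lo * b_lo)
```

The same idea extends recursively (e.g. 32x32 from four 16x16), which is how dense arrays of narrow multipliers can serve wider DSP precisions.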
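The 143 TOPs figure is consistent with simple back-of-envelope arithmetic. Assuming roughly 3,960 tensor blocks of 30 INT8 multipliers each (a device-level figure not stated in the abstract, so treat it as an assumption), at 600 MHz with two operations per multiply-accumulate:

```python
# Back-of-envelope check of the 143 INT8 TOPs figure.
# The block and multiplier counts are assumptions from public device
# descriptions, not numbers given in the abstract.
blocks = 3960
mults_per_block = 30
clock_hz = 600e6
ops_per_mac = 2  # one multiply + one accumulate

tops = blocks * mults_per_block * ops_per_mac * clock_hz / 1e12
print(round(tops, 1))  # ~142.6, quoted as 143 TOPs
```

Doubling the multiplier count at INT4/FP12 precision yields the quoted 286 TOPs by the same arithmetic.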