Hamamu: Specializing FPGAs for ML Applications by Adding Hard Matrix Multiplier Blocks

Aman Arora, Zhigang Wei, L. John
{"title":"Hamamu: Specializing FPGAs for ML Applications by Adding Hard Matrix Multiplier Blocks","authors":"Aman Arora, Zhigang Wei, L. John","doi":"10.1109/ASAP49362.2020.00018","DOIUrl":null,"url":null,"abstract":"Designing efficient hardware for accelerating artificial intelligence (AI) and machine learning (ML) applications is a major challenge. Rapidly changing algorithms and neural network architectures make FPGA based designs an attractive solution. But the generic building blocks available in current FPGAs (Logic Blocks (LBs), multipliers, DSP blocks) limit the acceleration that can be achieved. We propose Hamamu, a modification to the current FPGA architecture that makes FPGAs specialized for ML applications. Specifically, we propose adding hard matrix multiplier blocks (matmuls) into the FPGA fabric. These matmuls are implemented using systolic arrays of MACs (Multiply-And-Accumulate) and can be connected using programmable direct interconnect between neighboring matmuls to make larger systolic matrix multipliers. We explore various matmul sizes ($2\\times 2\\times 2$, $4\\times 4\\times 4$, $8\\times 8\\times 8$, $16\\times 16\\times 16$) and various strategies to place these blocks on the FPGA (Columnar, Surround, Hybrid). We find that providing $4\\times 4\\times 4$ hard matrix multiplier blocks in an FPGA speeds up neural networks from MLPerf benchmarks by up to $\\sim 3.9x$, compared to a Stratix-10 like FPGA with equal number of MACs, same MAC architecture and high DSP:LB ratio. Although the flexibility of the FPGA will reduce for non-ML applications, an FPGA with hard matrix multipliers is a faster, and more area efficient hardware accelerator for ML applications, compared to current FPGAs.","PeriodicalId":375691,"journal":{"name":"2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASAP49362.2020.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15

Abstract

Designing efficient hardware for accelerating artificial intelligence (AI) and machine learning (ML) applications is a major challenge. Rapidly changing algorithms and neural network architectures make FPGA-based designs an attractive solution. But the generic building blocks available in current FPGAs (Logic Blocks (LBs), multipliers, DSP blocks) limit the acceleration that can be achieved. We propose Hamamu, a modification to the current FPGA architecture that specializes FPGAs for ML applications. Specifically, we propose adding hard matrix multiplier blocks (matmuls) into the FPGA fabric. These matmuls are implemented using systolic arrays of MACs (Multiply-And-Accumulate units) and can be connected using programmable direct interconnect between neighboring matmuls to form larger systolic matrix multipliers. We explore various matmul sizes ($2\times 2\times 2$, $4\times 4\times 4$, $8\times 8\times 8$, $16\times 16\times 16$) and various strategies for placing these blocks on the FPGA (Columnar, Surround, Hybrid). We find that providing $4\times 4\times 4$ hard matrix multiplier blocks in an FPGA speeds up neural networks from the MLPerf benchmarks by up to $\sim 3.9\times$, compared to a Stratix-10-like FPGA with an equal number of MACs, the same MAC architecture, and a high DSP:LB ratio. Although the flexibility of the FPGA is reduced for non-ML applications, an FPGA with hard matrix multipliers is a faster and more area-efficient hardware accelerator for ML applications than current FPGAs.
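To make the composition idea concrete, below is a minimal behavioral sketch (Python/NumPy) of how fixed-size $N\times N\times N$ matmul blocks could be tiled into a larger matrix multiplier. It is an illustration only: the names (`MatmulBlock`, `tiled_matmul`), the output-stationary accumulation, and the software emulation of block-to-block chaining are assumptions made for the sketch, not details taken from the paper's hardware design.

```python
# Behavioral sketch only: models NxNxN hard matmul blocks composed into a
# larger multiplier. Names and dataflow are illustrative assumptions, not
# the paper's RTL or its exact systolic schedule.
import numpy as np

N = 4  # block size: a 4x4x4 matmul block multiplies two 4x4 tiles


class MatmulBlock:
    """One hard matmul block: an NxN grid of MAC units with local accumulators."""

    def __init__(self, n: int = N):
        self.n = n
        self.acc = np.zeros((n, n), dtype=np.int64)  # one accumulator per MAC

    def step(self, a_tile: np.ndarray, b_tile: np.ndarray) -> None:
        # Each MAC (i, j) multiplies-and-accumulates over the shared inner
        # dimension of the incoming tiles.
        self.acc += a_tile @ b_tile

    def read(self) -> np.ndarray:
        return self.acc


def tiled_matmul(a: np.ndarray, b: np.ndarray, n: int = N) -> np.ndarray:
    """Compute a larger product by tiling it across NxNxN blocks.

    Accumulating partial tiles across the shared (K) dimension stands in for
    the direct interconnect that chains neighboring blocks in hardware.
    """
    m, k = a.shape
    k2, p = b.shape
    assert k == k2 and m % n == 0 and k % n == 0 and p % n == 0
    c = np.zeros((m, p), dtype=np.int64)
    for bi in range(0, m, n):
        for bj in range(0, p, n):
            block = MatmulBlock(n)          # one logical block per output tile
            for bk in range(0, k, n):       # walk the shared dimension
                block.step(a[bi:bi + n, bk:bk + n], b[bk:bk + n, bj:bj + n])
            c[bi:bi + n, bj:bj + n] = block.read()
    return c


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(-8, 8, size=(8, 8))
    b = rng.integers(-8, 8, size=(8, 8))
    assert np.array_equal(tiled_matmul(a, b), a @ b)  # matches a direct multiply
```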