PF-GEMV: Utilization maximizing architecture in fast matrix–vector multiplication for GPT-2 inference

Impact factor 1.3 · JCR Q3, Engineering, Electrical & Electronic · CAS Zone 4, Computer Science
Hyeji Kim, Yeongmin Lee, Chun-Gi Lyuh
{"title":"PF-GEMV: Utilization maximizing architecture in fast matrix–vector multiplication for GPT-2 inference","authors":"Hyeji Kim,&nbsp;Yeongmin Lee,&nbsp;Chun-Gi Lyuh","doi":"10.4218/etrij.2024-0111","DOIUrl":null,"url":null,"abstract":"<p>Owing to the widespread advancement of transformer-based artificial neural networks, artificial intelligence (AI) processors are now required to perform matrix–vector multiplication in addition to the conventional matrix–matrix multiplication. However, current AI processor architectures are optimized for general matrix–matrix multiplications (GEMMs), which causes significant throughput degradation when processing general matrix–vector multiplications (GEMVs). In this study, we proposed a port-folding GEMV (PF-GEMV) scheme employing multiformat and low-precision techniques while reusing an outer product-based processor optimized for conventional GEMM operations. This approach achieves 93.7% utilization in GEMV operations with an 8-bit format on an 8 \n<span></span><math>\n <mo>×</mo></math> 8 processor, thus resulting in a 7.5 \n<span></span><math>\n <mo>×</mo></math> increase in throughput compared with that of the original scheme. Furthermore, when applied to the matrix operation of the GPT-2 large model, an increase in speed by 7 \n<span></span><math>\n <mo>×</mo></math> is achieved in single-batch inferences.</p>","PeriodicalId":11901,"journal":{"name":"ETRI Journal","volume":"46 5","pages":"817-828"},"PeriodicalIF":1.3000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.4218/etrij.2024-0111","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ETRI Journal","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.4218/etrij.2024-0111","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Owing to the widespread advancement of transformer-based artificial neural networks, artificial intelligence (AI) processors are now required to perform matrix–vector multiplication in addition to the conventional matrix–matrix multiplication. However, current AI processor architectures are optimized for general matrix–matrix multiplications (GEMMs), which causes significant throughput degradation when processing general matrix–vector multiplications (GEMVs). In this study, we propose a port-folding GEMV (PF-GEMV) scheme employing multiformat and low-precision techniques while reusing an outer-product-based processor optimized for conventional GEMM operations. This approach achieves 93.7% utilization in GEMV operations with an 8-bit format on an 8 × 8 processor, resulting in a 7.5× increase in throughput compared with that of the original scheme. Furthermore, when applied to the matrix operations of the GPT-2 Large model, a 7× speedup is achieved in single-batch inference.
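
The utilization and throughput figures in the abstract can be connected with a simple back-of-envelope argument: on an 8 × 8 outer-product MAC array, a naively mapped GEMV keeps only one column of multipliers busy (about 12.5% utilization), so lifting utilization to the reported 93.7% corresponds to roughly a 7.5× throughput gain. The Python sketch below works through that arithmetic; it is an illustrative model only, not the paper's PF-GEMV dataflow, and the 12.5% naive baseline is an assumption not stated in the abstract.

    # Back-of-envelope utilization model for an 8x8 outer-product MAC array.
    # Illustrative sketch based only on the figures in the abstract; the actual
    # PF-GEMV port-folding dataflow is described in the paper itself.

    ARRAY_DIM = 8                        # 8x8 processing-element array
    MACS_TOTAL = ARRAY_DIM * ARRAY_DIM   # 64 multipliers available per cycle

    # GEMM: each cycle an 8-element column of one matrix meets an 8-element row
    # of the other, so all 64 multipliers are busy.
    gemm_utilization = 1.0

    # Naive GEMV on the same array (assumption): the second operand degenerates
    # to a single vector, so only one input port carries data and only one
    # column of PEs does useful work.
    naive_gemv_utilization = ARRAY_DIM / MACS_TOTAL      # 8/64 = 12.5%

    # PF-GEMV, per the abstract: folding work onto the otherwise idle ports
    # reaches 93.7% utilization for the 8-bit format.
    pf_gemv_utilization = 0.937

    speedup = pf_gemv_utilization / naive_gemv_utilization   # ~7.5x
    print(f"naive GEMV utilization : {naive_gemv_utilization:.1%}")
    print(f"PF-GEMV utilization    : {pf_gemv_utilization:.1%}")
    print(f"throughput gain        : {speedup:.1f}x")

Under these assumptions the model reproduces the abstract's 7.5× figure (0.937 / 0.125 ≈ 7.5), which is consistent with the claim that the gain comes from recovering idle multiplier ports rather than from adding compute.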


Source journal

ETRI Journal (Engineering & Technology – Telecommunications)
CiteScore: 4.00
Self-citation rate: 7.10%
Articles per year: 98
Average review time: 6.9 months
Journal description: ETRI Journal is an international, peer-reviewed multidisciplinary journal published bimonthly in English. The main focus of the journal is to provide an open forum to exchange innovative ideas and technology in the fields of information, telecommunications, and electronics. Key topics of interest include high-performance computing, big data analytics, cloud computing, multimedia technology, communication networks and services, wireless communications and mobile computing, material and component technology, as well as security. With an international editorial committee and experts from around the world as reviewers, ETRI Journal publishes high-quality research papers on the latest and best developments from the global community.