Understanding the efficiency of GPU algorithms for matrix-matrix multiplication

K. Fatahalian, J. Sugerman, P. Hanrahan
{"title":"Understanding the efficiency of GPU algorithms for matrix-matrix multiplication","authors":"K. Fatahalian, J. Sugerman, P. Hanrahan","doi":"10.1145/1058129.1058148","DOIUrl":null,"url":null,"abstract":"Utilizing graphics hardware for general purpose numerical computations has become a topic of considerable interest. The implementation of streaming algorithms, typified by highly parallel computations with little reuse of input data, has been widely explored on GPUs. We relax the streaming model's constraint on input reuse and perform an in-depth analysis of dense matrix-matrix multiplication, which reuses each element of input matrices O(n) times. Its regular data access pattern and highly parallel computational requirements suggest matrix-matrix multiplication as an obvious candidate for efficient evaluation on GPUs but, surprisingly we find even near-optimal GPU implementations are pronouncedly less efficient than current cache-aware CPU approaches. We find the key cause of this inefficiency is that the GPU can fetch less data and yet execute more arithmetic operations per clock than the CPU when both are operating out of their closest caches. 
The lack of high bandwidth access to cached data will impair the performance of GPU implementations of any computation featuring significant input reuse.","PeriodicalId":266180,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"360","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1058129.1058148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 360

Abstract

Utilizing graphics hardware for general-purpose numerical computations has become a topic of considerable interest. The implementation of streaming algorithms, typified by highly parallel computations with little reuse of input data, has been widely explored on GPUs. We relax the streaming model's constraint on input reuse and perform an in-depth analysis of dense matrix-matrix multiplication, which reuses each element of its input matrices O(n) times. Its regular data access pattern and highly parallel computational requirements suggest matrix-matrix multiplication as an obvious candidate for efficient evaluation on GPUs but, surprisingly, we find that even near-optimal GPU implementations are pronouncedly less efficient than current cache-aware CPU approaches. We find the key cause of this inefficiency is that the GPU can fetch less data, and yet execute more arithmetic operations, per clock than the CPU when both are operating out of their closest caches. The lack of high-bandwidth access to cached data will impair the performance of GPU implementations of any computation featuring significant input reuse.
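To make the O(n) reuse claim concrete, here is an illustrative sketch (not code from the paper) of the naive dense multiply C = A·B. In the innermost loop, row i of A and column j of B are traversed together; over the full loop nest, each element of A is read once per column of B and each element of B once per row of A, so every input element is fetched n times while being the operand of 2n floating-point operations in total.

```python
def matmul(A, B):
    """Naive n x n matrix-matrix multiply, C = A * B.

    Each A[i][k] is read for every j (n times) and each B[k][j]
    for every i (n times) -- the O(n) input reuse the abstract
    refers to. Cache-aware CPU implementations exploit this by
    blocking the loops so reused operands stay in cache.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
```

The paper's point is that this reuse is exactly what the GPUs of the time could not exploit: without high-bandwidth access to a large on-chip cache, each reuse becomes another trip to a slower memory level.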