Accelerating Graph Neural Networks with a Novel Matrix Compression Format

João N. F. Alves, Samir Moustafa, Siegfried Benkner, Alexandre P. Francisco, Wilfried N. Gansterer, Luís M. S. Russo
{"title":"用新颖的矩阵压缩格式加速图神经网络","authors":"João N. F. Alves, Samir Moustafa, Siegfried Benkner, Alexandre P. Francisco, Wilfried N. Gansterer, Luís M. S. Russo","doi":"arxiv-2409.02208","DOIUrl":null,"url":null,"abstract":"The inference and training stages of Graph Neural Networks (GNNs) are often\ndominated by the time required to compute a long sequence of matrix\nmultiplications between the sparse graph adjacency matrix and its embedding. To\naccelerate these stages, we first propose the Compressed Binary Matrix (CBM)\nstorage format to succinctly represent the binary adjacency matrix of an\nunweighted graph. Then, we show how to generalize this representation to\nnormalized adjacency matrices of unweighted graphs which arise in the context\nof GNNs. Finally, we develop efficient matrix multiplication kernels based on\nthis compressed representation. The matrix multiplication kernels proposed in\nthis work never require more scalar operations than classic sparse matrix\nmultiplication algorithms. Experimental evaluation shows that the matrix\nmultiplication strategies proposed outperform the current state-of-the-art\nimplementations provided by Intel MKL, achieving speedups close to 5$\\times$.\nFurthermore, our optimized matrix-multiplication strategies accelerated the\ninference time of a GNN by up to $3\\times$.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"21 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Accelerating Graph Neural Networks with a Novel Matrix Compression Format\",\"authors\":\"João N. F. Alves, Samir Moustafa, Siegfried Benkner, Alexandre P. Francisco, Wilfried N. Gansterer, Luís M. S. 
Russo\",\"doi\":\"arxiv-2409.02208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The inference and training stages of Graph Neural Networks (GNNs) are often\\ndominated by the time required to compute a long sequence of matrix\\nmultiplications between the sparse graph adjacency matrix and its embedding. To\\naccelerate these stages, we first propose the Compressed Binary Matrix (CBM)\\nstorage format to succinctly represent the binary adjacency matrix of an\\nunweighted graph. Then, we show how to generalize this representation to\\nnormalized adjacency matrices of unweighted graphs which arise in the context\\nof GNNs. Finally, we develop efficient matrix multiplication kernels based on\\nthis compressed representation. The matrix multiplication kernels proposed in\\nthis work never require more scalar operations than classic sparse matrix\\nmultiplication algorithms. Experimental evaluation shows that the matrix\\nmultiplication strategies proposed outperform the current state-of-the-art\\nimplementations provided by Intel MKL, achieving speedups close to 5$\\\\times$.\\nFurthermore, our optimized matrix-multiplication strategies accelerated the\\ninference time of a GNN by up to $3\\\\times$.\",\"PeriodicalId\":501525,\"journal\":{\"name\":\"arXiv - CS - Data Structures and Algorithms\",\"volume\":\"21 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Data Structures and 
Algorithms\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.02208\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Data Structures and Algorithms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.02208","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The inference and training stages of Graph Neural Networks (GNNs) are often dominated by the time required to compute a long sequence of matrix multiplications between the sparse graph adjacency matrix and its embedding. To accelerate these stages, we first propose the Compressed Binary Matrix (CBM) storage format to succinctly represent the binary adjacency matrix of an unweighted graph. Then, we show how to generalize this representation to normalized adjacency matrices of unweighted graphs which arise in the context of GNNs. Finally, we develop efficient matrix multiplication kernels based on this compressed representation. The matrix multiplication kernels proposed in this work never require more scalar operations than classic sparse matrix multiplication algorithms. Experimental evaluation shows that the matrix multiplication strategies proposed outperform the current state-of-the-art implementations provided by Intel MKL, achieving speedups close to 5$\times$. Furthermore, our optimized matrix-multiplication strategies accelerated the inference time of a GNN by up to $3\times$.
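The core idea named in the abstract can be illustrated with a toy sketch. The code below is an assumed, simplified rendering of the general technique, not the paper's actual CBM layout or kernels: each binary adjacency row is stored as a small delta (columns added/removed) relative to a similar, previously stored row, and the multiplication kernel then reuses the reference row's already-computed partial product. Note how the fallback case stores the row as a plain column list, which is why such a scheme never needs more scalar operations than a classic row-by-row sparse multiplication.

```python
# Toy sketch of a delta-compressed binary matrix and its SpMM kernel.
# Illustrative only: the real CBM format chooses references far more
# cleverly and operates on compressed index arrays, not Python sets.
import numpy as np

def compress_rows(A):
    """Compress a binary matrix row by row into (ref, added, removed) triples.

    ref == -1 means the row is stored against the all-zero row,
    i.e. as an explicit column list (CSR-like fallback).
    """
    col_sets = [set(np.flatnonzero(row)) for row in A]
    compressed = []
    for i, cols in enumerate(col_sets):
        best = (-1, sorted(cols), [])          # fallback: explicit column list
        best_cost = len(cols)
        for j in range(i):                     # naive O(n^2) reference search
            add, rem = cols - col_sets[j], col_sets[j] - cols
            if len(add) + len(rem) < best_cost:
                best_cost = len(add) + len(rem)
                best = (j, sorted(add), sorted(rem))
        compressed.append(best)
    return compressed

def spmm(compressed, X):
    """Multiply the compressed binary matrix by a dense matrix X."""
    Y = np.zeros((len(compressed), X.shape[1]))
    for i, (ref, add, rem) in enumerate(compressed):
        # References always point to earlier rows, so Y[ref] is ready.
        acc = Y[ref].copy() if ref >= 0 else np.zeros(X.shape[1])
        for j in add:
            acc += X[j]                        # columns present here, absent in ref
        for j in rem:
            acc -= X[j]                        # columns present in ref, absent here
        Y[i] = acc
    return Y
```

For a normalized adjacency matrix $\hat{A} = D^{-1/2} A D^{-1/2}$ with $A$ binary, the product $\hat{A}X = D^{-1/2}\,(A\,(D^{-1/2}X))$ reduces to two cheap diagonal scalings around a binary multiplication, which suggests how a binary kernel like the one above can be generalized to the normalized matrices arising in GNNs; the paper develops this generalization in detail.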