Sparse Deep Neural Network Inference Using Different Programming Models

Hyungro Lee, Milan Jain, Sayan Ghosh
{"title":"Sparse Deep Neural Network Inference Using Different Programming Models","authors":"Hyungro Lee, Milan Jain, Sayan Ghosh","doi":"10.1109/HPEC55821.2022.9926362","DOIUrl":null,"url":null,"abstract":"Sparse deep neural networks have gained increasing attention recently in achieving speedups on inference with reduced memory footprints. Real-world applications often have to deal with sparse data and irregularities in the computations, yet a wide variety of Deep Neural Network (DNN) tasks remain dense without exploiting the advantages of sparsity in networks. Recent works presented in MIT/IEEE/Amazon GraphChallenge have demonstrated significant speedups and various techniques. Still, we find that there is limited investigation of the impact of various Python and C/C++ based programming models to explore new opportunities for the general cases. In this work, we provide extensive quantitative evaluations of various contemporary GPGPU programming models such as Cupy, Cuda CUSPARSE, and Openmp in the context of Sparse Deep Neural Network (SPDNN) implementations (derived from the Graph Challenge reference serial code) on single and multiple GPUs from NVIDIA DGX-A100 40GB/80GB platforms.","PeriodicalId":200071,"journal":{"name":"2022 IEEE High Performance Extreme Computing Conference (HPEC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE High Performance Extreme Computing Conference (HPEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPEC55821.2022.9926362","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Sparse deep neural networks have recently gained attention for achieving inference speedups with reduced memory footprints. Real-world applications often must deal with sparse data and irregular computations, yet a wide variety of Deep Neural Network (DNN) tasks remain dense, without exploiting the advantages of sparsity in networks. Recent works presented in the MIT/IEEE/Amazon GraphChallenge have demonstrated significant speedups and a variety of techniques. Still, we find that there is limited investigation of the impact of various Python- and C/C++-based programming models, which could expose new opportunities in the general case. In this work, we provide extensive quantitative evaluations of contemporary GPGPU programming models such as CuPy, CUDA cuSPARSE, and OpenMP in the context of Sparse Deep Neural Network (SpDNN) implementations (derived from the GraphChallenge reference serial code) on single and multiple GPUs of NVIDIA DGX-A100 40GB/80GB platforms.
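To illustrate the kind of workload the paper evaluates: GraphChallenge-style SpDNN inference repeatedly applies Y_{l+1} = ReLU(Y_l W_l + b_l), where both the activations Y and the weights W are sparse matrices. The sketch below is a minimal CPU analogue using `scipy.sparse`; it is not the authors' code, and the layer shapes, bias value, and function name are illustrative assumptions. On a GPU, the CuPy variant (`cupyx.scipy.sparse`) follows near-identical syntax, which is one reason the paper compares it against cuSPARSE and OpenMP implementations.

```python
# Hypothetical sketch of one sparse DNN inference layer in the
# GraphChallenge style: Y_{l+1} = ReLU(Y_l @ W_l + b_l), with
# activations and weights stored as sparse CSR matrices.
# scipy.sparse stands in for cupyx.scipy.sparse on the GPU.
import numpy as np
import scipy.sparse as sp

def spdnn_layer(Y, W, bias):
    """One inference layer: sparse matmul, bias add, ReLU, re-sparsify."""
    Z = (Y @ W).toarray() + bias   # sparse-sparse matmul, then dense bias add
    Z = np.maximum(Z, 0.0)         # ReLU zeroes negatives, restoring sparsity
    return sp.csr_matrix(Z)        # store the sparse activations compactly

# Toy example: 2 input rows, 3 features, one layer of weights.
Y0 = sp.csr_matrix(np.array([[1.0, 0.0, 2.0],
                             [0.0, 3.0, 0.0]]))
W0 = sp.csr_matrix(np.array([[0.5, 0.0, 0.0],
                             [0.0, -1.0, 0.0],
                             [0.0, 0.0, 1.0]]))
Y1 = spdnn_layer(Y0, W0, bias=-0.5)  # only one nonzero activation survives
```

The round trip through a dense array after the bias add is the simple (but memory-hungry) formulation; the programming-model comparisons in the paper hinge on how efficiently each framework keeps this product sparse end to end.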