{"title":"基于不同规划模型的稀疏深度神经网络推理","authors":"Hyungro Lee, Milan Jain, Sayan Ghosh","doi":"10.1109/HPEC55821.2022.9926362","DOIUrl":null,"url":null,"abstract":"Sparse deep neural networks have gained increasing attention recently in achieving speedups on inference with reduced memory footprints. Real-world applications often have to deal with sparse data and irregularities in the computations, yet a wide variety of Deep Neural Network (DNN) tasks remain dense without exploiting the advantages of sparsity in networks. Recent works presented in MIT/IEEE/Amazon GraphChallenge have demonstrated significant speedups and various techniques. Still, we find that there is limited investigation of the impact of various Python and C/C++ based programming models to explore new opportunities for the general cases. In this work, we provide extensive quantitative evaluations of various contemporary GPGPU programming models such as Cupy, Cuda CUSPARSE, and Openmp in the context of Sparse Deep Neural Network (SPDNN) implementations (derived from the Graph Challenge reference serial code) on single and multiple GPUs from NVIDIA DGX-A100 40GB/80GB platforms.","PeriodicalId":200071,"journal":{"name":"2022 IEEE High Performance Extreme Computing Conference (HPEC)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Sparse Deep Neural Network Inference Using Different Programming Models\",\"authors\":\"Hyungro Lee, Milan Jain, Sayan Ghosh\",\"doi\":\"10.1109/HPEC55821.2022.9926362\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sparse deep neural networks have gained increasing attention recently in achieving speedups on inference with reduced memory footprints. 
Real-world applications often have to deal with sparse data and irregularities in the computations, yet a wide variety of Deep Neural Network (DNN) tasks remain dense without exploiting the advantages of sparsity in networks. Recent works presented in MIT/IEEE/Amazon GraphChallenge have demonstrated significant speedups and various techniques. Still, we find that there is limited investigation of the impact of various Python and C/C++ based programming models to explore new opportunities for the general cases. In this work, we provide extensive quantitative evaluations of various contemporary GPGPU programming models such as Cupy, Cuda CUSPARSE, and Openmp in the context of Sparse Deep Neural Network (SPDNN) implementations (derived from the Graph Challenge reference serial code) on single and multiple GPUs from NVIDIA DGX-A100 40GB/80GB platforms.\",\"PeriodicalId\":200071,\"journal\":{\"name\":\"2022 IEEE High Performance Extreme Computing Conference (HPEC)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE High Performance Extreme Computing Conference (HPEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPEC55821.2022.9926362\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE High Performance Extreme Computing Conference 
(HPEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPEC55821.2022.9926362","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Sparse Deep Neural Network Inference Using Different Programming Models

Abstract
Sparse deep neural networks have recently gained increasing attention for achieving inference speedups with reduced memory footprints. Real-world applications often have to deal with sparse data and irregular computations, yet a wide variety of Deep Neural Network (DNN) tasks remain dense, without exploiting the advantages of sparsity in networks. Recent work presented in the MIT/IEEE/Amazon GraphChallenge has demonstrated significant speedups through a variety of techniques. Still, we find that the impact of various Python- and C/C++-based programming models has received limited investigation, leaving new opportunities for the general case unexplored. In this work, we provide extensive quantitative evaluations of contemporary GPGPU programming models such as CuPy, CUDA cuSPARSE, and OpenMP in the context of Sparse Deep Neural Network (SpDNN) implementations (derived from the GraphChallenge reference serial code) on single and multiple GPUs of NVIDIA DGX-A100 40GB/80GB platforms.
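The SpDNN inference the abstract refers to reduces, layer by layer, to a sparse matrix product followed by a biased, clamped ReLU. A minimal CPU sketch of that kernel using SciPy sparse matrices is below (CuPy's `cupyx.scipy.sparse` exposes an analogous GPU-side API); the function name and the bias/clamp defaults are illustrative placeholders, not the challenge's dataset-specific constants:

```python
import numpy as np
import scipy.sparse as sp

def spdnn_infer(Y0, weights, bias=-0.3, cap=32.0):
    """GraphChallenge-style sparse DNN inference sketch:
    for each layer W, compute Y = min(max(Y @ W + bias, 0), cap),
    keeping Y sparse throughout.
    Y0: CSR matrix of input features (rows = samples).
    weights: list of CSR layer matrices.
    """
    Y = Y0
    for W in weights:
        Z = Y @ W                          # sparse-sparse product (SpGEMM)
        Z.data += bias                     # bias applied to stored nonzeros;
                                           # zero entries stay zero after ReLU
        Z.data = np.maximum(Z.data, 0.0)   # ReLU
        Z.data = np.minimum(Z.data, cap)   # clamp activations at `cap`
        Z.eliminate_zeros()                # keep the representation sparse
        Y = Z
    return Y
```

Keeping `Y` in CSR form end to end is what lets a GPU implementation map each layer onto a single SpGEMM call (e.g. via cuSPARSE) instead of a dense GEMM.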