{"title":"支持cuda的Hadoop集群用于稀疏矩阵向量乘法","authors":"M. Reza, Aman Sinha, Rajkumar Nag, P. Mohanty","doi":"10.1109/ReTIS.2015.7232872","DOIUrl":null,"url":null,"abstract":"Compute Unified Device Architecture (CUDA) is an architecture and programming model that allows leveraging the high compute-intensive processing power of the Graphical Processing Units (GPUs) to perform general, non-graphical tasks in a massively parallel manner. Hadoop is an open-source software framework that has its own file system, the Hadoop Distributed File System (HDFS), and its own programming model, the Map Reduce, in order to accomplish the tasks of storage of very large amount of data and their fast processing in a distributed manner in a cluster of inexpensive hardware. This paper presents a model and implementation of a Hadoop-CUDA Hybrid approach to perform Sparse Matrix Vector Multiplication (SpMV) of very large matrices in a very high performing manner. Hadoop is used for splitting the input matrix into smaller sub-matrices, storing them on individual data nodes and then invoking the required CUDA kernels on the individual GPU-possessing cluster nodes. The original SpMV is done using CUDA. Such an implementation has been seen to improve the performance of the SpMV operation over very large matrices by speedup of around 1.4 in comparison to non-Hadoop, single-GPU CUDA implementation.","PeriodicalId":161306,"journal":{"name":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"CUDA-enabled Hadoop cluster for Sparse Matrix Vector Multiplication\",\"authors\":\"M. Reza, Aman Sinha, Rajkumar Nag, P. Mohanty\",\"doi\":\"10.1109/ReTIS.2015.7232872\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Compute Unified Device Architecture (CUDA) is an architecture and programming model that allows leveraging the high compute-intensive processing power of the Graphical Processing Units (GPUs) to perform general, non-graphical tasks in a massively parallel manner. Hadoop is an open-source software framework that has its own file system, the Hadoop Distributed File System (HDFS), and its own programming model, the Map Reduce, in order to accomplish the tasks of storage of very large amount of data and their fast processing in a distributed manner in a cluster of inexpensive hardware. This paper presents a model and implementation of a Hadoop-CUDA Hybrid approach to perform Sparse Matrix Vector Multiplication (SpMV) of very large matrices in a very high performing manner. Hadoop is used for splitting the input matrix into smaller sub-matrices, storing them on individual data nodes and then invoking the required CUDA kernels on the individual GPU-possessing cluster nodes. The original SpMV is done using CUDA. 
Such an implementation has been seen to improve the performance of the SpMV operation over very large matrices by speedup of around 1.4 in comparison to non-Hadoop, single-GPU CUDA implementation.\",\"PeriodicalId\":161306,\"journal\":{\"name\":\"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ReTIS.2015.7232872\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ReTIS.2015.7232872","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CUDA-enabled Hadoop cluster for Sparse Matrix Vector Multiplication
Compute Unified Device Architecture (CUDA) is an architecture and programming model that leverages the compute-intensive processing power of Graphics Processing Units (GPUs) to perform general, non-graphical tasks in a massively parallel manner. Hadoop is an open-source software framework with its own file system, the Hadoop Distributed File System (HDFS), and its own programming model, MapReduce, designed to store very large amounts of data and process them quickly in a distributed manner on a cluster of inexpensive hardware. This paper presents a model and implementation of a hybrid Hadoop-CUDA approach for high-performance Sparse Matrix Vector Multiplication (SpMV) of very large matrices. Hadoop splits the input matrix into smaller sub-matrices, stores them on individual data nodes, and then invokes the required CUDA kernels on the GPU-equipped cluster nodes; the SpMV computation itself is performed in CUDA. This implementation has been observed to improve the performance of SpMV over very large matrices, achieving a speedup of around 1.4x compared to a non-Hadoop, single-GPU CUDA implementation.
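To make the per-node computation concrete, the following is a minimal, self-contained sketch of the kind of SpMV kernel a GPU-equipped data node might invoke on its sub-matrix. The abstract does not specify the sparse storage format or kernel design, so the CSR layout and the one-thread-per-row mapping here are assumptions for illustration, not the authors' implementation.

    #include <cstdio>
    #include <cuda_runtime.h>

    // CSR SpMV sketch (assumed format, not from the paper): one thread per
    // matrix row; each thread accumulates the dot product of its row's
    // nonzeros with the dense vector x.
    __global__ void spmv_csr(int num_rows, const int *row_ptr,
                             const int *col_idx, const float *vals,
                             const float *x, float *y)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < num_rows) {
            float dot = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                dot += vals[j] * x[col_idx[j]];
            y[row] = dot;
        }
    }

    int main()
    {
        // Tiny 3x3 example: [[1 0 2], [0 3 0], [4 0 5]] in CSR form.
        const int num_rows = 3;
        int   h_row_ptr[] = {0, 2, 3, 5};
        int   h_col_idx[] = {0, 2, 1, 0, 2};
        float h_vals[]    = {1, 2, 3, 4, 5};
        float h_x[]       = {1, 1, 1};
        float h_y[3];

        int *d_row_ptr, *d_col_idx;
        float *d_vals, *d_x, *d_y;
        cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
        cudaMalloc(&d_col_idx, sizeof(h_col_idx));
        cudaMalloc(&d_vals, sizeof(h_vals));
        cudaMalloc(&d_x, sizeof(h_x));
        cudaMalloc(&d_y, sizeof(h_y));
        cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
        cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
        cudaMemcpy(d_vals, h_vals, sizeof(h_vals), cudaMemcpyHostToDevice);
        cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

        spmv_csr<<<1, 256>>>(num_rows, d_row_ptr, d_col_idx, d_vals, d_x, d_y);
        cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);

        for (int i = 0; i < num_rows; ++i)
            printf("y[%d] = %f\n", i, h_y[i]);  // expected: 3, 3, 9

        cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_vals);
        cudaFree(d_x); cudaFree(d_y);
        return 0;
    }

In the hybrid scheme described above, Hadoop mappers would partition the full matrix by row blocks and ship each block to a data node, where a kernel along these lines runs on that node's sub-matrix; the per-node partial result vectors are then gathered to form the final product.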