{"title":"基于CUDA和消息传递接口的多GPU机器学习计算","authors":"Bhagirath, Neetu Mittal, Sushil Kumar","doi":"10.1109/PEEIC47157.2019.8976714","DOIUrl":null,"url":null,"abstract":"In this paper, we provide our efforts to implement machine learning modeling on commodity hardware such as general purpose graphical processing unit (GPU) and multiple GPU's connected with message passing interface (MPI). We consider risk models that involve a large number of iterations to come up with a probability of defaults for any credit account. This is computed based on the Markov Chain analysis. We discuss data structures and efficient implementation of machine learning models on the GPU platform. Idea is to leverage the power of fast GPU RAM and thousands of GPU core for fasten the execution process and reduce overall time. When we increase the number of GPU in our experiment, it also increases the programming complexity and increase the number of I/O which leads to increase overall turnaround time. We benchmarked the scalability and performance of our implementation with respect to size of the data. Performing model computations on huge amount o.f data is a compute intensive and costly task. We purpose four combinations of CPU, GPU and MPI for machine learning modeling. Experiment on real data show that to training machine leaning model on single GPU outperform as compare to CPu, Multiple GPU and GPU connected with MPI","PeriodicalId":203504,"journal":{"name":"2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Machine Learning Computation on Multiple GPU's using CUDA and Message Passing Interface\",\"authors\":\"Bhagirath, Neetu Mittal, Sushil Kumar\",\"doi\":\"10.1109/PEEIC47157.2019.8976714\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we provide our efforts to implement machine learning modeling on commodity hardware such as general purpose graphical processing unit (GPU) and multiple GPU's connected with message passing interface (MPI). We consider risk models that involve a large number of iterations to come up with a probability of defaults for any credit account. This is computed based on the Markov Chain analysis. We discuss data structures and efficient implementation of machine learning models on the GPU platform. Idea is to leverage the power of fast GPU RAM and thousands of GPU core for fasten the execution process and reduce overall time. When we increase the number of GPU in our experiment, it also increases the programming complexity and increase the number of I/O which leads to increase overall turnaround time. We benchmarked the scalability and performance of our implementation with respect to size of the data. Performing model computations on huge amount o.f data is a compute intensive and costly task. We purpose four combinations of CPU, GPU and MPI for machine learning modeling. 
Experiment on real data show that to training machine leaning model on single GPU outperform as compare to CPu, Multiple GPU and GPU connected with MPI\",\"PeriodicalId\":203504,\"journal\":{\"name\":\"2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PEEIC47157.2019.8976714\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PEEIC47157.2019.8976714","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Machine Learning Computation on Multiple GPU's using CUDA and Message Passing Interface
In this paper, we present our efforts to implement machine learning modeling on commodity hardware such as a general-purpose graphics processing unit (GPU) and multiple GPUs connected through the Message Passing Interface (MPI). We consider risk models that require a large number of iterations to arrive at a probability of default for each credit account, computed using Markov chain analysis. We discuss data structures and the efficient implementation of machine learning models on the GPU platform. The idea is to leverage fast GPU memory and thousands of GPU cores to speed up execution and reduce overall run time. Increasing the number of GPUs in our experiments also increases programming complexity and the amount of I/O, which raises the overall turnaround time. We benchmarked the scalability and performance of our implementation with respect to the size of the data. Performing model computations on huge amounts of data is a compute-intensive and costly task. We propose four combinations of CPU, GPU, and MPI for machine learning modeling. Experiments on real data show that training the machine learning model on a single GPU outperforms the CPU, multiple GPUs, and GPUs connected with MPI.
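The paper does not publish its code, so the listing below is only a minimal sketch of how a Markov-chain probability-of-default (PD) computation might map onto CUDA, with one thread per credit account. The state count S, iteration count ITERS, the kernel name pd_kernel, and the toy transition matrix are illustrative assumptions, not taken from the paper; state S-1 is treated as an absorbing "default" state and PD is the probability mass in that state after repeated transitions p_{t+1} = p_t * T.

    // Hypothetical sketch (not the authors' published code): per-account PD
    // from a credit-state Markov chain, one CUDA thread per account.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    constexpr int S = 8;        // number of delinquency states (assumption)
    constexpr int ITERS = 360;  // e.g. 30 years of monthly transitions (assumption)

    __constant__ float d_T[S * S];  // row-stochastic transition matrix

    __global__ void pd_kernel(const float* init, float* pd, int nAccounts)
    {
        int a = blockIdx.x * blockDim.x + threadIdx.x;
        if (a >= nAccounts) return;

        float p[S], q[S];
        for (int s = 0; s < S; ++s) p[s] = init[a * S + s];

        for (int it = 0; it < ITERS; ++it) {
            for (int j = 0; j < S; ++j) {
                float acc = 0.f;
                for (int i = 0; i < S; ++i)
                    acc += p[i] * d_T[i * S + j];   // p_{t+1}[j] = sum_i p_t[i] * T[i][j]
                q[j] = acc;
            }
            for (int s = 0; s < S; ++s) p[s] = q[s];
        }
        pd[a] = p[S - 1];   // probability of ending in the absorbing default state
    }

    int main()
    {
        const int nAccounts = 1 << 20;                      // 1M accounts (toy size)
        std::vector<float> h_T(S * S, 0.f), h_init(nAccounts * S, 0.f);
        for (int i = 0; i < S; ++i) {                       // toy transition matrix:
            h_T[i * S + i] = 0.95f;                         // stay with prob 0.95,
            h_T[i * S + ((i + 1) % S)] += 0.05f;            // worsen with prob 0.05
        }
        h_T[(S - 1) * S + 0] = 0.f;                         // make default absorbing
        h_T[(S - 1) * S + (S - 1)] = 1.0f;
        for (int a = 0; a < nAccounts; ++a) h_init[a * S] = 1.f;  // all start "current"

        float *d_init, *d_pd;
        cudaMalloc(&d_init, nAccounts * S * sizeof(float));
        cudaMalloc(&d_pd, nAccounts * sizeof(float));
        cudaMemcpyToSymbol(d_T, h_T.data(), sizeof(float) * S * S);
        cudaMemcpy(d_init, h_init.data(), nAccounts * S * sizeof(float), cudaMemcpyHostToDevice);

        pd_kernel<<<(nAccounts + 255) / 256, 256>>>(d_init, d_pd, nAccounts);

        float pd0;
        cudaMemcpy(&pd0, d_pd, sizeof(float), cudaMemcpyDeviceToHost);
        printf("PD of first account after %d steps: %f\n", ITERS, pd0);
        cudaFree(d_init); cudaFree(d_pd);
        return 0;
    }

To mimic the paper's GPU-plus-MPI configuration, one would launch one MPI rank per GPU (cudaSetDevice(rank % deviceCount)), give each rank its own slice of the accounts, and collect the per-account PDs with MPI_Gather; the extra host-device copies and the gather step are examples of the added I/O that the abstract attributes to the multi-GPU setups.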