{"title":"虚拟环境下深度学习GPU性能评估","authors":"R. Radhakrishnan, Y. Varma, Uday Kurkure","doi":"10.1109/HPCS48598.2019.9188098","DOIUrl":null,"url":null,"abstract":"Deep Learning (DL) is the fastest growing high performance data center class workload today. Deep learning algorithms render themselves well to taking advantage of GPU parallelism, therefore GPGPU acceleration is a mainstay of the DL computing infrastructure. In this paper we evaluate virtualized GPU performance based on training of state-of-the art deep learning models. We find that there is a correlation between the amount of I/O traffic generated in the deep learning training workload and the efficiency of GPGPU performance in virtualized environments. We show that one can achieve high efficiency when using GPGPUs in virtualized and networkattached multi-GPU environments to perform highly computeintensive workloads.","PeriodicalId":371856,"journal":{"name":"2019 International Conference on High Performance Computing & Simulation (HPCS)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating GPU Performance for Deep Learning Workloads in Virtualized Environment\",\"authors\":\"R. Radhakrishnan, Y. Varma, Uday Kurkure\",\"doi\":\"10.1109/HPCS48598.2019.9188098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Learning (DL) is the fastest growing high performance data center class workload today. Deep learning algorithms render themselves well to taking advantage of GPU parallelism, therefore GPGPU acceleration is a mainstay of the DL computing infrastructure. In this paper we evaluate virtualized GPU performance based on training of state-of-the art deep learning models. We find that there is a correlation between the amount of I/O traffic generated in the deep learning training workload and the efficiency of GPGPU performance in virtualized environments. We show that one can achieve high efficiency when using GPGPUs in virtualized and networkattached multi-GPU environments to perform highly computeintensive workloads.\",\"PeriodicalId\":371856,\"journal\":{\"name\":\"2019 International Conference on High Performance Computing & Simulation (HPCS)\",\"volume\":\"82 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on High Performance Computing & Simulation (HPCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPCS48598.2019.9188098\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on High Performance Computing & Simulation (HPCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCS48598.2019.9188098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating GPU Performance for Deep Learning Workloads in Virtualized Environment
Deep Learning (DL) is the fastest growing high-performance data center class workload today. Deep learning algorithms lend themselves well to exploiting GPU parallelism, so GPGPU acceleration is a mainstay of the DL computing infrastructure. In this paper we evaluate virtualized GPU performance based on training of state-of-the-art deep learning models. We find that there is a correlation between the amount of I/O traffic generated by the deep learning training workload and the efficiency of GPGPU performance in virtualized environments. We show that one can achieve high efficiency when using GPGPUs in virtualized and network-attached multi-GPU environments to run highly compute-intensive workloads.
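As a rough illustration of how such an evaluation can be driven, the following is a minimal sketch that times synthetic training steps and reports images/sec as a proxy for GPU efficiency in a virtual machine. It assumes a PyTorch/torchvision setup on a CUDA-capable GPU; the ResNet-50 model, batch size, and metric are illustrative assumptions, not the authors' benchmark configuration.

# Minimal throughput sketch (assumptions: PyTorch + torchvision installed,
# CUDA GPU visible to the VM; model and batch size are illustrative only).
import time
import torch
import torchvision

def measure_throughput(batch_size=64, steps=20, device="cuda"):
    """Time synthetic training steps and return images/sec as an efficiency proxy."""
    model = torchvision.models.resnet50().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()
    # Synthetic batch so no disk/network I/O is involved in this toy measurement.
    images = torch.randn(batch_size, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (batch_size,), device=device)

    # Warm-up step so one-time CUDA initialization is not counted.
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize()  # wait for all GPU work before stopping the clock
    elapsed = time.time() - start
    return batch_size * steps / elapsed

if __name__ == "__main__":
    print(f"{measure_throughput():.1f} images/sec")

Comparing the images/sec figure measured inside a virtual machine against the same script run on bare metal gives one simple efficiency ratio; a fuller study along the lines of the paper would also vary the data-loading path so that I/O traffic, not only compute, differs between runs.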