{"title":"A GPU Inference System Scheduling Algorithm with Asynchronous Data Transfer","authors":"Qin Zhang, L. Zha, Xiaohua Wan, Boqun Cheng","doi":"10.1109/IPDPSW.2019.00083","DOIUrl":null,"url":null,"abstract":"With the rapid expansion of application range, Deep-Learning has increasingly become an indispensable practical method to solve problems in various industries. In different application scenarios, especially in high concurrency areas such as search and recommendation, deep learning inference system is required to have high throughput and low latency, which can not be easily obtained at the same time. In this paper, we build a model to quantify the relationship between concurrency, throughput and job latency. Then we implement a GPU scheduling algorithm for inference jobs in deep learning inference system based on the model. The algorithm predicts the completion time of batch jobs being executed, and reasonably chooses the batch size of the next batch jobs according to the concurrency and upload data to GPU memory ahead of time. So that the system can hide the data transfer delay of GPU and achieve the minimum job latency under the premise of meetingthethroughputrequirements.Experimentsshowthatthe proposed GPU asynchronous data transfer scheduling algorithm improves throughput by 9% compared with the traditional synchronous algorithm, reduces the latency by 3%-76% under different concurrency, and can better suppress the job latency fluctuation caused by concurrency changing.","PeriodicalId":292054,"journal":{"name":"2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPSW.2019.00083","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
With the rapid expansion of its range of applications, deep learning has increasingly become an indispensable practical method for solving problems across industries. Many application scenarios, especially high-concurrency ones such as search and recommendation, require a deep learning inference system to deliver both high throughput and low latency, which are difficult to achieve simultaneously. In this paper, we build a model that quantifies the relationship between concurrency, throughput, and job latency. Based on this model, we implement a GPU scheduling algorithm for inference jobs in a deep learning inference system. The algorithm predicts the completion time of the batch jobs currently executing, chooses the size of the next batch according to the concurrency, and uploads its data to GPU memory ahead of time, so that the system hides the GPU data transfer delay and achieves minimum job latency while still meeting the throughput requirements. Experiments show that the proposed GPU asynchronous data transfer scheduling algorithm improves throughput by 9% compared with the traditional synchronous algorithm, reduces latency by 3%-76% under different concurrency levels, and better suppresses the job latency fluctuations caused by changes in concurrency.
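The core mechanism the abstract describes is overlapping the host-to-device upload of the next batch with the computation of the current one. The sketch below illustrates that technique with a double-buffered CUDA pipeline using two streams and cross-stream events; it is a minimal illustration under assumed details, not the paper's implementation, and the kernel, batch size, and buffer layout (inferKernel, batchElems, the two-slot scheme) are placeholders.

```cuda
// Minimal sketch of overlapped (asynchronous) batch upload and compute.
// While the GPU runs inference on batch b, the upload of batch b+1 is
// already in flight on a separate copy stream. All names below are
// illustrative placeholders, not the paper's code.
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder for an inference kernel; a real system would run a model here.
__global__ void inferKernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];
}

int main() {
    const int batchElems = 1 << 20;            // assumed batch size
    const int numBatches = 8;
    const size_t bytes = batchElems * sizeof(float);

    // Pinned host memory is required for cudaMemcpyAsync to truly overlap.
    float* hIn;
    cudaMallocHost(&hIn, (size_t)numBatches * bytes);
    for (size_t i = 0; i < (size_t)numBatches * batchElems; ++i) hIn[i] = 1.0f;

    // Double-buffered device memory: batch b uses slot b % 2.
    float *dIn[2], *dOut[2];
    cudaStream_t copyStream, computeStream;
    cudaEvent_t uploaded[2], consumed[2];      // cross-stream dependencies
    cudaStreamCreate(&copyStream);
    cudaStreamCreate(&computeStream);
    for (int s = 0; s < 2; ++s) {
        cudaMalloc(&dIn[s], bytes);
        cudaMalloc(&dOut[s], bytes);
        cudaEventCreate(&uploaded[s]);
        cudaEventCreate(&consumed[s]);
    }

    for (int b = 0; b < numBatches; ++b) {
        int slot = b % 2;
        // Upload batch b, but only after the kernel that last used this
        // slot (batch b - 2) has finished reading it.
        cudaStreamWaitEvent(copyStream, consumed[slot], 0);
        cudaMemcpyAsync(dIn[slot], hIn + (size_t)b * batchElems, bytes,
                        cudaMemcpyHostToDevice, copyStream);
        cudaEventRecord(uploaded[slot], copyStream);

        // Compute on batch b once its upload completes. The host loop
        // never blocks, so iteration b+1 enqueues its upload immediately,
        // letting the copy overlap this kernel and hide transfer latency.
        cudaStreamWaitEvent(computeStream, uploaded[slot], 0);
        inferKernel<<<(batchElems + 255) / 256, 256, 0, computeStream>>>(
            dIn[slot], dOut[slot], batchElems);
        cudaEventRecord(consumed[slot], computeStream);
    }
    cudaDeviceSynchronize();
    printf("processed %d batches with overlapped transfers\n", numBatches);

    for (int s = 0; s < 2; ++s) {
        cudaFree(dIn[s]); cudaFree(dOut[s]);
        cudaEventDestroy(uploaded[s]); cudaEventDestroy(consumed[s]);
    }
    cudaStreamDestroy(copyStream); cudaStreamDestroy(computeStream);
    cudaFreeHost(hIn);
    return 0;
}
```

In the paper's scheduler, the size of each next batch is additionally chosen from the predicted completion time of the executing batch and the observed concurrency; the sketch fixes the batch size for brevity and shows only the transfer/compute overlap.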