{"title":"GPU并行化的随机准时到达问题","authors":"Maleen Abeydeera, S. Samaranayake","doi":"10.1109/HiPC.2014.7116896","DOIUrl":null,"url":null,"abstract":"The Stochastic On-Time Arrival (SOTA) problem has recently been studied as an alternative to traditional shortest-path formulations in situations with hard deadlines. The goal is to find a routing strategy that maximizes the probability of reaching the destination within a pre-specified time budget, with the edge weights of the graph being random variables with arbitrary distributions. While this is a practically useful formulation for vehicle routing, the commercial deployment of such methods is not currently feasible due to the high computational complexity of existing solutions. We present a parallelization strategy for improving the computation times by multiple orders of magnitude compared to the single threaded CPU implementations, using a CUDA GPU implementation. A single order of magnitude is achieved via naive parallelization of the problem, and another order of magnitude via optimal utilization of the GPU resources. We also show that the runtime can be further reduced in certain cases using dynamic thread assignment and an edge clustering method for accelerating queries with a small time budget.","PeriodicalId":337777,"journal":{"name":"2014 21st International Conference on High Performance Computing (HiPC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"GPU parallelization of the stochastic on-time arrival problem\",\"authors\":\"Maleen Abeydeera, S. Samaranayake\",\"doi\":\"10.1109/HiPC.2014.7116896\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Stochastic On-Time Arrival (SOTA) problem has recently been studied as an alternative to traditional shortest-path formulations in situations with hard deadlines. The goal is to find a routing strategy that maximizes the probability of reaching the destination within a pre-specified time budget, with the edge weights of the graph being random variables with arbitrary distributions. While this is a practically useful formulation for vehicle routing, the commercial deployment of such methods is not currently feasible due to the high computational complexity of existing solutions. We present a parallelization strategy for improving the computation times by multiple orders of magnitude compared to the single threaded CPU implementations, using a CUDA GPU implementation. A single order of magnitude is achieved via naive parallelization of the problem, and another order of magnitude via optimal utilization of the GPU resources. 
We also show that the runtime can be further reduced in certain cases using dynamic thread assignment and an edge clustering method for accelerating queries with a small time budget.\",\"PeriodicalId\":337777,\"journal\":{\"name\":\"2014 21st International Conference on High Performance Computing (HiPC)\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 21st International Conference on High Performance Computing (HiPC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HiPC.2014.7116896\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 21st International Conference on High Performance Computing (HiPC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HiPC.2014.7116896","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
GPU parallelization of the stochastic on-time arrival problem
The Stochastic On-Time Arrival (SOTA) problem has recently been studied as an alternative to traditional shortest-path formulations in situations with hard deadlines. The goal is to find a routing strategy that maximizes the probability of reaching the destination within a pre-specified time budget, where the edge weights of the graph are random variables with arbitrary distributions. While this is a practically useful formulation for vehicle routing, commercial deployment of such methods is not currently feasible due to the high computational complexity of existing solutions. We present a CUDA GPU parallelization strategy that improves computation times by multiple orders of magnitude over single-threaded CPU implementations. One order of magnitude is achieved via naive parallelization of the problem, and another via optimal utilization of the GPU resources. We also show that the runtime can be further reduced in certain cases using dynamic thread assignment and an edge-clustering method that accelerates queries with a small time budget.
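For context, the computation being parallelized is the SOTA dynamic-programming recursion standard in this literature (stated here from the literature, not quoted from the paper): the probability u_i(t) of arriving on time from node i with remaining budget t is obtained by convolving each outgoing edge's travel-time distribution with the downstream node's probability and taking the maximum over outgoing edges, u_i(t) = max_j ∫_0^t p_ij(s) u_j(t - s) ds, with u equal to 1 at the destination. The sketch below is a minimal CUDA kernel for the per-edge convolution step under a discretized time budget; it is not the authors' implementation, and all names (sota_edge_update, edge_pdf, edge_val, and so on) are hypothetical.

```cuda
#include <cuda_runtime.h>

// Minimal sketch of the discretized SOTA per-edge update, one thread per edge.
// Assumptions (not from the paper): time is discretized into T bins, every edge
// has a minimum travel time of at least one bin, u[node * T + t] stores the
// probability of on-time arrival from `node` with t bins of budget remaining,
// and edge_pdf[e * T + s] is the discretized travel-time pmf of edge e.
__global__ void sota_edge_update(const int *edge_dst, const float *edge_pdf,
                                 const float *u, float *edge_val,
                                 int num_edges, int T, int t) {
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= num_edges) return;

    // Convolve the edge's travel-time pmf with the downstream node's arrival
    // probability: sum_{s=1..t} p_e(s) * u_dst(t - s).
    float acc = 0.0f;
    int dst = edge_dst[e];
    for (int s = 1; s <= t; ++s)
        acc += edge_pdf[e * T + s] * u[dst * T + (t - s)];
    edge_val[e] = acc;
}

// Host side, schematically: budgets are processed in increasing order, since
// u at budget t depends only on values at strictly smaller budgets.
//   for (int t = 1; t < T; ++t) {
//       sota_edge_update<<<blocks, threads>>>(d_edge_dst, d_edge_pdf, d_u,
//                                             d_edge_val, num_edges, T, t);
//       // then, per node, set u[node * T + t] to the max of edge_val over
//       // that node's outgoing edges
//   }
```

This per-budget, per-edge decomposition corresponds roughly to the naive parallelization the abstract mentions; the additional order of magnitude from better GPU utilization, as well as the dynamic thread assignment and edge clustering for small budgets, concern how this work is mapped onto threads and blocks and are not captured by the sketch.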