L. Chiou, Tsung-Han Yang, Jian-Tang Syu, Che-Pin Chang, Yeong-Jar Chang
{"title":"GPU Warp Scheduler的智能策略选择","authors":"L. Chiou, Tsung-Han Yang, Jian-Tang Syu, Che-Pin Chang, Yeong-Jar Chang","doi":"10.1109/AICAS.2019.8771596","DOIUrl":null,"url":null,"abstract":"The graphics processing unit (GPU) is widely used in applications that require massive computing resources such as big data, machine learning, computer vision, etc. As the diversity of applications grows, the GPU’s performance becomes difficult to maintain by its warp scheduler. Most of the prior studies of the warp scheduler are based on static analysis of GPU hardware behavior for certain types of benchmarks. We propose for the first time (to the best of our knowledge), a machine learning approach to intelligently select suitable policies for various applications in runtime. The simulation results indicate that the proposed approach can maintain performance comparable to the best policy across different applications.","PeriodicalId":273095,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Intelligent Policy Selection for GPU Warp Scheduler\",\"authors\":\"L. Chiou, Tsung-Han Yang, Jian-Tang Syu, Che-Pin Chang, Yeong-Jar Chang\",\"doi\":\"10.1109/AICAS.2019.8771596\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The graphics processing unit (GPU) is widely used in applications that require massive computing resources such as big data, machine learning, computer vision, etc. As the diversity of applications grows, the GPU’s performance becomes difficult to maintain by its warp scheduler. Most of the prior studies of the warp scheduler are based on static analysis of GPU hardware behavior for certain types of benchmarks. 
We propose for the first time (to the best of our knowledge), a machine learning approach to intelligently select suitable policies for various applications in runtime. The simulation results indicate that the proposed approach can maintain performance comparable to the best policy across different applications.\",\"PeriodicalId\":273095,\"journal\":{\"name\":\"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"volume\":\"113 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICAS.2019.8771596\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS.2019.8771596","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Intelligent Policy Selection for GPU Warp Scheduler

Abstract
The graphics processing unit (GPU) is widely used in applications that require massive computing resources, such as big data, machine learning, and computer vision. As the diversity of applications grows, it becomes difficult for any single warp-scheduling policy to sustain the GPU's performance. Most prior studies of the warp scheduler are based on static analysis of GPU hardware behavior for certain types of benchmarks. We propose, for the first time to the best of our knowledge, a machine learning approach that intelligently selects a suitable policy for each application at runtime. Simulation results indicate that the proposed approach maintains performance comparable to the best policy across different applications.
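To make the idea of runtime policy selection concrete, the following is a minimal hypothetical sketch: a toy classifier maps runtime hardware-counter features of a kernel to one of several warp-scheduling policies. The policy names (loose round-robin, greedy-then-oldest, two-level), the feature set, and the nearest-neighbour model are all illustrative assumptions for exposition; the paper's actual model, features, and policy pool may differ.

```python
# Hypothetical sketch of ML-driven warp-scheduler policy selection.
# Policies and features below are illustrative assumptions, not the
# paper's actual configuration.

POLICIES = ["lrr", "gto", "two_level"]  # loose round-robin, greedy-then-oldest, two-level

# Toy "training set" of (feature vector, best policy) pairs.
# Features: [memory-stall ratio, branch-divergence ratio, active-warp occupancy]
TRAINING = [
    ([0.8, 0.1, 0.9],  "two_level"),  # memory-bound kernel
    ([0.2, 0.7, 0.5],  "gto"),        # divergence-heavy kernel
    ([0.1, 0.1, 0.95], "lrr"),        # compute-bound, uniform kernel
]

def predict_policy(features):
    """Pick the policy of the nearest training example (1-NN, squared
    Euclidean distance) for the observed runtime counter vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda ex: sq_dist(ex[0], features))[1]

# A kernel whose counters look memory-bound is steered to the
# latency-tolerant policy:
print(predict_policy([0.75, 0.15, 0.85]))  # two_level
```

In a real scheduler the prediction would have to be cheap enough to run in hardware or in the driver, which is one reason simple models are attractive in this setting.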