Solving the task scheduling and GPU reconfiguration problem on MIG devices via deep reinforcement learning
Jorge Villarrubia, Luis Costero, Francisco D. Igual, Katzalin Olcoz
{"title":"利用深度强化学习解决MIG设备上的任务调度和GPU重构问题","authors":"Jorge Villarrubia, Luis Costero, Francisco D. Igual, Katzalin Olcoz","doi":"10.1016/j.future.2025.108145","DOIUrl":null,"url":null,"abstract":"<div><div>Recent advances in dynamic GPU partitioning, such as NVIDIA’s Multi-Instance GPU (MIG) technology, have enhanced resource utilization by enabling task co-execution without contention. However, existing MIG schedulers remain limited to static or task-agnostic methods that sacrifice optimality for tractability. This paper presents a Deep Reinforcement Learning framework that seeks to minimize the completion time of a task queue by holistically addressing the dimensions of the problem: task molding, GPU reconfiguration and execution order. To manage the vast solution space, we apply optimizations such as discrete and canonical representation of states, unification of equivalent configurations, action masking, or promoting the exploration of reconfigurations; this offers insights for similar resource management scenarios. The proposed models are extensively evaluated with widely used benchmarks of the Rodinia and Altis suites, and synthetic workloads generated to emulate a wide range of plausible real situations. The final model improves to the state-of-the-art, especially in workloads that clearly contradict the assumptions of previous proposals, achieving a difference of less than 20% to the optimum. Additionally, two different approaches to the problem are faced (offline vs. online), discussing their theoretical advantages and disadvantages, and evaluating them experimentally for the final model.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"176 ","pages":"Article 108145"},"PeriodicalIF":6.2000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Solving the task scheduling and GPU reconfiguration problem on MIG devices via deep reinforcement learning\",\"authors\":\"Jorge Villarrubia, Luis Costero, Francisco D. Igual, Katzalin Olcoz\",\"doi\":\"10.1016/j.future.2025.108145\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent advances in dynamic GPU partitioning, such as NVIDIA’s Multi-Instance GPU (MIG) technology, have enhanced resource utilization by enabling task co-execution without contention. However, existing MIG schedulers remain limited to static or task-agnostic methods that sacrifice optimality for tractability. This paper presents a Deep Reinforcement Learning framework that seeks to minimize the completion time of a task queue by holistically addressing the dimensions of the problem: task molding, GPU reconfiguration and execution order. To manage the vast solution space, we apply optimizations such as discrete and canonical representation of states, unification of equivalent configurations, action masking, or promoting the exploration of reconfigurations; this offers insights for similar resource management scenarios. The proposed models are extensively evaluated with widely used benchmarks of the Rodinia and Altis suites, and synthetic workloads generated to emulate a wide range of plausible real situations. The final model improves to the state-of-the-art, especially in workloads that clearly contradict the assumptions of previous proposals, achieving a difference of less than 20% to the optimum. 
Additionally, two different approaches to the problem are faced (offline vs. online), discussing their theoretical advantages and disadvantages, and evaluating them experimentally for the final model.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":\"176 \",\"pages\":\"Article 108145\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X2500439X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X2500439X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Recent advances in dynamic GPU partitioning, such as NVIDIA’s Multi-Instance GPU (MIG) technology, have improved resource utilization by enabling tasks to co-execute without contention. However, existing MIG schedulers remain limited to static or task-agnostic methods that sacrifice optimality for tractability. This paper presents a Deep Reinforcement Learning framework that seeks to minimize the completion time of a task queue by holistically addressing all dimensions of the problem: task molding, GPU reconfiguration, and execution order. To manage the vast solution space, we apply optimizations such as a discrete, canonical representation of states, the unification of equivalent configurations, action masking, and the promotion of exploratory reconfigurations; these techniques offer insights for similar resource-management scenarios. The proposed models are evaluated extensively with widely used benchmarks from the Rodinia and Altis suites, as well as synthetic workloads generated to emulate a wide range of plausible real situations. The final model improves on the state of the art, especially on workloads that clearly contradict the assumptions of previous proposals, and stays within 20% of the optimum. Additionally, two different approaches to the problem (offline vs. online) are compared: we discuss their theoretical advantages and disadvantages and evaluate them experimentally with the final model.
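To make two of the listed optimizations concrete (the canonical state representation that unifies equivalent MIG configurations, and action masking), the following is a minimal Python sketch. It is not the authors' implementation: the toy action space, the names (canonical, enumerate_configs, action_mask, masked_softmax), and the simplified validity rule (instance sizes summing to at most 7 A100 compute slices, ignoring MIG's stricter placement constraints) are all assumptions made for illustration.

    # Hypothetical sketch: canonical MIG configurations and action masking.
    # Not the paper's code; names and the simplified validity rule are assumptions.
    from itertools import combinations_with_replacement
    import numpy as np

    SLICE_SIZES = (1, 2, 3, 4, 7)   # A100 MIG compute-slice sizes
    TOTAL_SLICES = 7

    def canonical(config):
        # Sort instance sizes so permutations of the same partition
        # collapse into one state (unifying equivalent configurations).
        return tuple(sorted(config))

    def enumerate_configs():
        # All multisets of instance sizes that fit in 7 slices.
        # Simplification: real MIG placement rules are stricter.
        configs = set()
        for r in range(1, TOTAL_SLICES + 1):
            for combo in combinations_with_replacement(SLICE_SIZES, r):
                if sum(combo) <= TOTAL_SLICES:
                    configs.add(canonical(combo))
        return sorted(configs)

    CONFIGS = enumerate_configs()

    def action_mask(config, queue_len):
        # Toy action space: one "launch a queued task on an instance of
        # size s" action per slice size, plus one "reconfigure to c"
        # action per canonical configuration other than the current one.
        launch_ok = [s in config and queue_len > 0 for s in SLICE_SIZES]
        reconf_ok = [c != canonical(config) for c in CONFIGS]
        return np.array(launch_ok + reconf_ok, dtype=bool)

    def masked_softmax(logits, mask):
        # Invalid actions receive -inf logits and hence zero probability,
        # so the policy network can never select them.
        logits = np.where(mask, logits, -np.inf)
        z = np.exp(logits - logits.max())
        return z / z.sum()

For example, with config = (3, 4) and a non-empty queue, action_mask permits launching onto the 3- and 4-slice instances and reconfiguring to any different canonical partition, while launch actions for sizes 1, 2, and 7 are masked out before the softmax.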
Journal overview:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To keep pace with these advances, there is a growing need for tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.