Optimal Workload Placement on Multi-Instance GPUs

Bekir Turkkan, Pavankumar Murali, Pavithra Harsha, Rohan Arora, Gerard Vanloo, Chandra Narayanaswami
{"title":"多实例 GPU 上的最佳工作负载配置","authors":"Bekir Turkkan, Pavankumar Murali, Pavithra Harsha, Rohan Arora, Gerard Vanloo, Chandra Narayanaswami","doi":"arxiv-2409.06646","DOIUrl":null,"url":null,"abstract":"There is an urgent and pressing need to optimize usage of Graphical\nProcessing Units (GPUs), which have arguably become one of the most expensive\nand sought after IT resources. To help with this goal, several of the current\ngeneration of GPUs support a partitioning feature, called Multi-Instance GPU\n(MIG) to allow multiple workloads to share a GPU, albeit with some constraints.\nIn this paper we investigate how to optimize the placement of Large Language\nModel (LLM)-based AI Inferencing workloads on GPUs. We first identify and\npresent several use cases that are encountered in practice that require\nworkloads to be efficiently placed or migrated to other GPUs to make room for\nincoming workloads. The overarching goal is to use as few GPUs as possible and\nto further minimize memory and compute wastage on GPUs that are utilized. We\nhave developed two approaches to address this problem: an optimization method\nand a heuristic method. We benchmark these with two workload scheduling\nheuristics for multiple use cases. Our results show up to 2.85x improvement in\nthe number of GPUs used and up to 70% reduction in GPU wastage over baseline\nheuristics. We plan to enable the SRE community to leverage our proposed method\nin production environments.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"410 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimal Workload Placement on Multi-Instance GPUs\",\"authors\":\"Bekir Turkkan, Pavankumar Murali, Pavithra Harsha, Rohan Arora, Gerard Vanloo, Chandra Narayanaswami\",\"doi\":\"arxiv-2409.06646\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is an urgent and pressing need to optimize usage of Graphical\\nProcessing Units (GPUs), which have arguably become one of the most expensive\\nand sought after IT resources. To help with this goal, several of the current\\ngeneration of GPUs support a partitioning feature, called Multi-Instance GPU\\n(MIG) to allow multiple workloads to share a GPU, albeit with some constraints.\\nIn this paper we investigate how to optimize the placement of Large Language\\nModel (LLM)-based AI Inferencing workloads on GPUs. We first identify and\\npresent several use cases that are encountered in practice that require\\nworkloads to be efficiently placed or migrated to other GPUs to make room for\\nincoming workloads. The overarching goal is to use as few GPUs as possible and\\nto further minimize memory and compute wastage on GPUs that are utilized. We\\nhave developed two approaches to address this problem: an optimization method\\nand a heuristic method. We benchmark these with two workload scheduling\\nheuristics for multiple use cases. Our results show up to 2.85x improvement in\\nthe number of GPUs used and up to 70% reduction in GPU wastage over baseline\\nheuristics. 
We plan to enable the SRE community to leverage our proposed method\\nin production environments.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"410 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06646\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06646","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

There is an urgent and pressing need to optimize usage of Graphical Processing Units (GPUs), which have arguably become one of the most expensive and sought after IT resources. To help with this goal, several of the current generation of GPUs support a partitioning feature, called Multi-Instance GPU (MIG) to allow multiple workloads to share a GPU, albeit with some constraints. In this paper we investigate how to optimize the placement of Large Language Model (LLM)-based AI Inferencing workloads on GPUs. We first identify and present several use cases that are encountered in practice that require workloads to be efficiently placed or migrated to other GPUs to make room for incoming workloads. The overarching goal is to use as few GPUs as possible and to further minimize memory and compute wastage on GPUs that are utilized. We have developed two approaches to address this problem: an optimization method and a heuristic method. We benchmark these with two workload scheduling heuristics for multiple use cases. Our results show up to 2.85x improvement in the number of GPUs used and up to 70% reduction in GPU wastage over baseline heuristics. We plan to enable the SRE community to leverage our proposed method in production environments.
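The abstract frames the placement task as packing workloads, each requiring a specific MIG slice, onto as few GPUs as possible while minimizing leftover compute and memory on the GPUs that are used. As a rough illustration of the kind of baseline scheduling heuristic such methods are benchmarked against, the sketch below implements a first-fit-decreasing placement over a hypothetical MIG profile set. The profile names and per-GPU capacities are loosely modeled on the A100-40GB MIG geometry, and the simplification that any combination of slices fitting within the capacity totals is valid (real MIG imposes stricter placement-geometry constraints) is an assumption for illustration; this is not the paper's optimization or heuristic method.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical MIG profiles: name -> (compute slices, memory in GB).
# Loosely based on A100-40GB MIG profiles; the paper's exact profile set
# and placement rules are not reproduced here.
PROFILES = {
    "1g.5gb": (1, 5),
    "2g.10gb": (2, 10),
    "3g.20gb": (3, 20),
    "4g.20gb": (4, 20),
    "7g.40gb": (7, 40),
}

GPU_COMPUTE_SLICES = 7   # assumed compute slices per GPU
GPU_MEMORY_GB = 40       # assumed memory per GPU


@dataclass
class Gpu:
    used_compute: int = 0
    used_memory: int = 0
    placements: List[str] = field(default_factory=list)

    def fits(self, profile: str) -> bool:
        # Simplification: only check aggregate capacity, ignoring the
        # geometric placement constraints real MIG enforces.
        c, m = PROFILES[profile]
        return (self.used_compute + c <= GPU_COMPUTE_SLICES
                and self.used_memory + m <= GPU_MEMORY_GB)

    def place(self, profile: str) -> None:
        c, m = PROFILES[profile]
        self.used_compute += c
        self.used_memory += m
        self.placements.append(profile)


def first_fit_decreasing(workload_profiles: List[str]) -> List[Gpu]:
    """Greedy baseline: sort workloads by demand (largest first), place each
    on the first GPU with enough free capacity, and open a new GPU only when
    no existing GPU fits."""
    gpus: List[Gpu] = []
    ordered = sorted(workload_profiles, key=lambda p: PROFILES[p], reverse=True)
    for profile in ordered:
        target: Optional[Gpu] = next((g for g in gpus if g.fits(profile)), None)
        if target is None:
            target = Gpu()
            gpus.append(target)
        target.place(profile)
    return gpus


if __name__ == "__main__":
    # Hypothetical batch of inference workloads, each pre-sized to a MIG profile.
    demo = ["3g.20gb", "1g.5gb", "2g.10gb", "4g.20gb", "1g.5gb", "2g.10gb"]
    for i, gpu in enumerate(first_fit_decreasing(demo)):
        waste_c = GPU_COMPUTE_SLICES - gpu.used_compute
        waste_m = GPU_MEMORY_GB - gpu.used_memory
        print(f"GPU {i}: {gpu.placements} | unused slices={waste_c}, unused memory={waste_m} GB")
```

A baseline like this minimizes neither GPU count nor wastage optimally; the paper's reported gains (up to 2.85x fewer GPUs and up to 70% less wastage) are measured against scheduling heuristics of roughly this flavor.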