FlashFlex: Accommodating Large Language Model Training over Heterogeneous Environment

Ran Yan, Youhe Jiang, Wangcheng Tao, Xiaonan Nie, Bin Cui, Binhang Yuan
{"title":"FlashFlex: Accommodating Large Language Model Training over Heterogeneous Environment","authors":"Ran Yan, Youhe Jiang, Wangcheng Tao, Xiaonan Nie, Bin Cui, Binhang Yuan","doi":"arxiv-2409.01143","DOIUrl":null,"url":null,"abstract":"Training large language model (LLM) is a computationally intensive task,\nwhich is typically conducted in data centers with homogeneous high-performance\nGPUs. This paper explores an alternative approach by deploying the training\ncomputation across heterogeneous GPUs to enable better flexibility and\nefficiency for heterogeneous resource utilization. To achieve this goal, we\npropose a novel system, FlashFlex, that can flexibly support an asymmetric\npartition of the parallel training computations across the scope of data-,\npipeline-, and tensor model parallelism. We further formalize the allocation of\nasymmetric partitioned training computations over a set of heterogeneous GPUs\nas a constrained optimization problem and propose an efficient solution based\non a hierarchical graph partitioning algorithm. Our approach can adaptively\nallocate asymmetric training computations across GPUs, fully leveraging the\navailable computational power. We conduct extensive empirical studies to\nevaluate the performance of FlashFlex, where we find that when training LLMs at\ndifferent scales (from 7B to 30B), FlashFlex can achieve comparable training\nMFU when running over a set of heterogeneous GPUs compared with the state of\nthe art training systems running over a set of homogeneous high-performance\nGPUs with the same amount of total peak FLOPS. The achieved smallest gaps in\nMFU are 11.61% and 0.30%, depending on whether the homogeneous setting is\nequipped with and without RDMA. Our implementation is available at\nhttps://github.com/Relaxed-System-Lab/FlashFlex.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.01143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Training large language models (LLMs) is a computationally intensive task, which is typically conducted in data centers with homogeneous high-performance GPUs. This paper explores an alternative approach by deploying the training computation across heterogeneous GPUs to enable better flexibility and efficiency for heterogeneous resource utilization. To achieve this goal, we propose a novel system, FlashFlex, that can flexibly support an asymmetric partition of the parallel training computations across the scope of data, pipeline, and tensor model parallelism. We further formalize the allocation of asymmetrically partitioned training computations over a set of heterogeneous GPUs as a constrained optimization problem and propose an efficient solution based on a hierarchical graph partitioning algorithm. Our approach can adaptively allocate asymmetric training computations across GPUs, fully leveraging the available computational power. We conduct extensive empirical studies to evaluate the performance of FlashFlex, where we find that when training LLMs at different scales (from 7B to 30B), FlashFlex achieves comparable training MFU (model FLOPS utilization) when running over a set of heterogeneous GPUs compared with state-of-the-art training systems running over a set of homogeneous high-performance GPUs with the same total peak FLOPS. The smallest achieved gaps in MFU are 11.61% and 0.30%, depending on whether the homogeneous setting is equipped with or without RDMA. Our implementation is available at https://github.com/Relaxed-System-Lab/FlashFlex.
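To make the allocation problem described in the abstract more concrete, the following is a minimal sketch, not the FlashFlex algorithm itself: FlashFlex uses a hierarchical graph partitioning formulation, whereas this toy heuristic simply groups heterogeneous GPUs into pipeline stages and assigns each stage a number of layers proportional to its aggregate peak compute, illustrating what an "asymmetric partition" can look like. All GPU names, throughput numbers, and the `partition` helper are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's hierarchical graph partitioning):
# greedily balance heterogeneous GPUs across pipeline stages, then give
# stronger stages proportionally more transformer layers.
from dataclasses import dataclass


@dataclass
class GPU:
    name: str
    peak_tflops: float  # approximate peak dense FP16 throughput (assumed values)


def partition(gpus: list[GPU], num_stages: int, num_layers: int):
    # Greedy balancing: place each GPU (largest first) onto the currently
    # weakest stage so stage compute ends up roughly comparable.
    stages: list[list[GPU]] = [[] for _ in range(num_stages)]
    for gpu in sorted(gpus, key=lambda g: g.peak_tflops, reverse=True):
        weakest = min(stages, key=lambda s: sum(g.peak_tflops for g in s))
        weakest.append(gpu)

    # Asymmetric layer split: layer counts follow each stage's share of compute.
    total = sum(g.peak_tflops for g in gpus)
    layers = [round(num_layers * sum(g.peak_tflops for g in s) / total) for s in stages]
    layers[-1] += num_layers - sum(layers)  # absorb rounding drift
    return stages, layers


if __name__ == "__main__":
    # Hypothetical mixed cluster: one A100 plus three RTX 3090s, two pipeline stages.
    cluster = [GPU("A100", 312.0), GPU("3090", 71.0), GPU("3090", 71.0), GPU("3090", 71.0)]
    stages, layers = partition(cluster, num_stages=2, num_layers=32)
    for i, (stage, n) in enumerate(zip(stages, layers)):
        print(f"stage {i}: {[g.name for g in stage]} -> {n} layers")
```

In this toy setup the A100 stage receives 19 of the 32 layers and the three-3090 stage receives 13, i.e. the partition is asymmetric rather than an even split; the actual system additionally accounts for memory limits, tensor-parallel groups, and interconnect bandwidth, which this sketch ignores.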