FlashDecoding++Next: High Throughput LLM Inference With Latency and Memory Optimization

IF 3.8 | CAS Tier 2 (Computer Science) | JCR Q2 | Computer Science, Hardware & Architecture
Guohao Dai;Ke Hong;Qiuli Mao;Xiuhong Li;Jiaming Xu;Haofeng Huang;Hongtu Xia;Xuefei Ning;Shengen Yan;Yun Liang;Yu Wang
{"title":"FlashDecoding++Next: High Throughput LLM Inference With Latency and Memory Optimization","authors":"Guohao Dai;Ke Hong;Qiuli Mao;Xiuhong Li;Jiaming Xu;Haofeng Huang;Hongtu Xia;Xuefei Ning;Shengen Yan;Yun Liang;Yu Wang","doi":"10.1109/TC.2025.3585339","DOIUrl":null,"url":null,"abstract":"As the Large Language Model (LLM) becomes increasingly important in various domains, the performance of LLM inference is crucial to massive LLM applications. However, centering around the computational efficiency and the memory utilization, the following challenges remain unsolved in achieving high-throughput LLM inference: (1) Synchronous partial softmax update. The softmax operation requires a synchronous update operation among each partial softmax result, leading to <inline-formula><tex-math>$\\sim$</tex-math></inline-formula>20% overheads for the attention computation in LLMs. (2) Under-utilized computation of flat GEMM. The shape of matrices performing GEMM in LLM inference tends to be flat, leading to under-utilized computation and 50% performance loss after padding zeros in previous designs (<i>e.g.,</i> cuBLAS, CUTLASS, etc.). (3) Memory redundancy caused by activations. Dynamic allocation of activations during inference leads to redundant storage of useless variables, bringing 22% more memory consumption. We present <i>FlashDecoding++Next</i>, a high-throughput inference engine supporting mainstream LLMs and hardware backends. To tackle the above challenges, <i>FlashDecoding++Next</i> creatively proposes: <b>(1) Asynchronous softmax with unified maximum.</b> <i>FlashDecoding++Next</i> introduces a unified maximum technique for different partial softmax computations to avoid synchronization. Based on this, a fine-grained pipelining is proposed, leading to 1.18<inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> and 1.14<inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> for the <i>prefill</i> and decode phases in LLM inference, respectively. <b>(2) Flat GEMM optimization with double buffering.</b> <i>FlashDecoding++Next</i> points out that flat GEMMs with different shapes face varied bottlenecks. Then, techniques like double buffering are introduced, resulting in up to 52% speedup for the flat GEMM operation. (3) Buffer reusing and unified memory management. <i>FlashDecoding++Next</i> reuses the pre-allocated activation buffers throughout the inference process to remove redundancy. Based on that, we unify the management of different types of storage to further exploit the reusing opportunity. The memory optimization enables up to 1.57<inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> longer sequence to be processed. <i>FlashDecoding++Next</i> demonstrates remarkable throughput improvement, delivering up to <b>68.88</b><inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> higher throughput compared to the HuggingFace <xref>[1]</xref> implementation. 
On average, <i>FlashDecoding++Next</i> achieves <b>1.25</b><inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> and <b>1.46</b><inline-formula><tex-math>$\\boldsymbol{\\times}$</tex-math></inline-formula> higher throughput compared to vLLM <xref>[2]</xref> and TensorRT-LLM <xref>[3]</xref> on mainstream LLMs.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 10","pages":"3263-3276"},"PeriodicalIF":3.8000,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11062854/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

As large language models (LLMs) become increasingly important in various domains, the performance of LLM inference is crucial to the massive number of LLM applications. However, with regard to computational efficiency and memory utilization, the following challenges remain unsolved in achieving high-throughput LLM inference: (1) Synchronous partial softmax update. The softmax operation requires a synchronous update among the partial softmax results, leading to $\sim$20% overhead in the attention computation of LLMs. (2) Under-utilized computation of flat GEMM. The matrices involved in GEMM during LLM inference tend to be flat, leading to under-utilized computation and 50% performance loss after padding with zeros in previous designs (e.g., cuBLAS and CUTLASS). (3) Memory redundancy caused by activations. Dynamic allocation of activations during inference leads to redundant storage of useless variables, bringing 22% more memory consumption. We present FlashDecoding++Next, a high-throughput inference engine supporting mainstream LLMs and hardware backends. To tackle the above challenges, FlashDecoding++Next proposes: (1) Asynchronous softmax with unified maximum. FlashDecoding++Next introduces a unified maximum technique for different partial softmax computations to avoid synchronization. Based on this, a fine-grained pipelining scheme is proposed, yielding 1.18$\boldsymbol{\times}$ and 1.14$\boldsymbol{\times}$ speedups for the prefill and decode phases of LLM inference, respectively. (2) Flat GEMM optimization with double buffering. FlashDecoding++Next points out that flat GEMMs of different shapes face different bottlenecks. Techniques such as double buffering are then introduced, resulting in up to 52% speedup for the flat GEMM operation. (3) Buffer reusing and unified memory management. FlashDecoding++Next reuses pre-allocated activation buffers throughout the inference process to remove redundancy. Based on that, we unify the management of different types of storage to further exploit reuse opportunities. The memory optimization enables up to 1.57$\boldsymbol{\times}$ longer sequences to be processed. FlashDecoding++Next demonstrates remarkable throughput improvement, delivering up to 68.88$\boldsymbol{\times}$ higher throughput compared to the HuggingFace [1] implementation. On average, FlashDecoding++Next achieves 1.25$\boldsymbol{\times}$ and 1.46$\boldsymbol{\times}$ higher throughput compared to vLLM [2] and TensorRT-LLM [3] on mainstream LLMs.
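To make the first technique concrete, below is a minimal NumPy sketch (not the paper's CUDA kernels) of the asynchronous-softmax idea: each chunk of attention scores is exponentiated against a single pre-chosen maximum rather than its own running maximum, so partial sums can be accumulated in any order without a synchronous rescaling step. The names `phi` and `chunked_softmax_unified_max` are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch of "asynchronous softmax with unified maximum", assuming a
# 1-D vector of attention scores and a pre-chosen unified maximum `phi`.
import numpy as np

def chunked_softmax_unified_max(scores: np.ndarray, phi: float, chunk: int = 256) -> np.ndarray:
    """Softmax over `scores`, computed chunk-by-chunk with a unified maximum `phi`.

    `phi` must upper-bound (or closely approximate) the true maximum of the
    scores; otherwise exp(scores - phi) may overflow. The paper chooses such a
    value from the statistics of attention scores and falls back to a
    recomputation path when a score exceeds it.
    """
    partial_exp = []
    partial_sum = 0.0
    for start in range(0, scores.shape[0], chunk):
        s = scores[start:start + chunk]
        e = np.exp(s - phi)          # no dependence on other chunks' maxima
        partial_exp.append(e)
        partial_sum += e.sum()       # partial sums can be reduced in any order
    return np.concatenate(partial_exp) / partial_sum

# Quick check against the standard max-stabilized softmax.
rng = np.random.default_rng(0)
x = rng.normal(size=1024)
ref = np.exp(x - x.max())
ref /= ref.sum()
out = chunked_softmax_unified_max(x, phi=x.max() + 1.0)
assert np.allclose(out, ref)
```

Because every chunk is normalized by the same `phi`, the per-chunk exponentials never need to be rescaled when chunks are combined, which is what removes the synchronization point between partial softmax results.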
Source journal
IEEE Transactions on Computers (Engineering & Technology, Engineering: Electrical & Electronic)
CiteScore: 6.60
Self-citation rate: 5.40%
Publication volume: 199
Review time: 6.0 months
Journal introduction: The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.