{"title":"QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead","authors":"Amir Zandieh, Majid Daliri, Insu Han","doi":"arxiv-2406.03482","DOIUrl":null,"url":null,"abstract":"Serving LLMs requires substantial memory due to the storage requirements of\nKey-Value (KV) embeddings in the KV cache, which grows with sequence length. An\neffective approach to compress KV cache is quantization. However, traditional\nquantization methods face significant memory overhead due to the need to store\nquantization constants (at least a zero point and a scale) in full precision\nper data block. Depending on the block size, this overhead can add 1 or 2 bits\nper quantized number. We introduce QJL, a new quantization approach that\nconsists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit\nquantization. In contrast to existing methods, QJL eliminates memory overheads\nby removing the need for storing quantization constants. We propose an\nasymmetric estimator for the inner product of two vectors and demonstrate that\napplying QJL to one vector and a standard JL transform without quantization to\nthe other provides an unbiased estimator with minimal distortion. We have\ndeveloped an efficient implementation of the QJL sketch and its corresponding\ninner product estimator, incorporating a lightweight CUDA kernel for optimized\ncomputation. When applied across various LLMs and NLP tasks to quantize the KV\ncache to only 3 bits, QJL demonstrates a more than fivefold reduction in KV\ncache memory usage without compromising accuracy, all while achieving faster\nruntime. Codes are available at \\url{https://github.com/amirzandieh/QJL}.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.03482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Serving LLMs requires substantial memory due to the storage requirements of Key-Value (KV) embeddings in the KV cache, which grow with sequence length. An effective approach to compressing the KV cache is quantization. However, traditional quantization methods incur significant memory overhead because they must store quantization constants (at least a zero point and a scale) in full precision per data block. Depending on the block size, this overhead can add 1 or 2 bits per quantized number. We introduce QJL, a new quantization approach that consists of a Johnson-Lindenstrauss (JL) transform followed by sign-bit quantization. In contrast to existing methods, QJL eliminates this memory overhead by removing the need to store quantization constants. We propose an asymmetric estimator for the inner product of two vectors and demonstrate that applying QJL to one vector and a standard, unquantized JL transform to the other yields an unbiased estimator with minimal distortion. We have developed an efficient implementation of the QJL sketch and its corresponding inner product estimator, incorporating a lightweight CUDA kernel for optimized computation. When applied across various LLMs and NLP tasks to quantize the KV cache to only 3 bits, QJL achieves a more than fivefold reduction in KV cache memory usage without compromising accuracy, all while running faster. Code is available at https://github.com/amirzandieh/QJL.
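To make the construction described in the abstract concrete, below is a minimal NumPy sketch of the asymmetric 1-bit QJL idea: one vector (a key) is passed through a JL transform and quantized to its sign bits, the other vector (a query) uses the same JL transform without quantization, and their inner product is estimated from the two sketches. The sqrt(pi/2) rescaling and the one stored scalar per key (its norm) are assumptions added here to make the estimator unbiased under Gaussian projections; the function names and sketch dimension are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 128, 1024                    # embedding dim, sketch dim (accuracy vs. memory)
S = rng.standard_normal((m, d))     # shared Gaussian JL projection

def qjl_quantize_key(k):
    """Compress a key vector to m sign bits plus its norm (assumed stored as one scalar)."""
    signs = np.sign(S @ k)          # 1 bit per sketch coordinate
    return signs, np.linalg.norm(k)

def qjl_inner_product(q, key_signs, key_norm):
    """Asymmetric estimate of <q, k>: unquantized JL sketch of the query against
    the key's sign bits, rescaled by sqrt(pi/2) * ||k|| / m to remove the bias
    introduced by the sign quantization (assumption, not stated in the abstract)."""
    return np.sqrt(np.pi / 2) * key_norm * (S @ q) @ key_signs / m

# Quick sanity check against the exact inner product.
k, q = rng.standard_normal(d), rng.standard_normal(d)
signs, norm_k = qjl_quantize_key(k)
print(float(q @ k), float(qjl_inner_product(q, signs, norm_k)))
```

In this sketch only the sign bits (and one norm per key) need to be kept, which is the sense in which no per-block zero point or scale is stored; increasing m tightens the estimate at the cost of more bits per key.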