{"title":"CSKV:长上下文场景中 KV 高速缓存的高效通道缩减训练","authors":"Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang","doi":"arxiv-2409.10593","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) have been widely adopted to process long-context\ntasks. However, the large memory overhead of the key-value (KV) cache poses\nsignificant challenges in long-context scenarios. Existing training-free KV\ncache compression methods typically focus on quantization and token pruning,\nwhich have compression limits, and excessive sparsity can lead to severe\nperformance degradation. Other methods design new architectures with less KV\noverhead but require significant training overhead. To address the above two\ndrawbacks, we further explore the redundancy in the channel dimension and apply\nan architecture-level design with minor training costs. Therefore, we introduce\nCSKV, a training-efficient Channel Shrinking technique for KV cache\ncompression: (1) We first analyze the singular value distribution of the KV\ncache, revealing significant redundancy and compression potential along the\nchannel dimension. Based on this observation, we propose using low-rank\ndecomposition for key and value layers and storing the low-dimension features.\n(2) To preserve model performance, we introduce a bi-branch KV cache, including\na window-based full-precision KV cache and a low-precision compressed KV cache.\n(3) To reduce the training costs, we minimize the layer-wise reconstruction\nloss for the compressed KV cache instead of retraining the entire LLMs.\nExtensive experiments show that CSKV can reduce the memory overhead of the KV\ncache by 80% while maintaining the model's long-context capability. Moreover,\nwe show that our method can be seamlessly combined with quantization to further\nreduce the memory overhead, achieving a compression ratio of up to 95%.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios\",\"authors\":\"Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang\",\"doi\":\"arxiv-2409.10593\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLMs) have been widely adopted to process long-context\\ntasks. However, the large memory overhead of the key-value (KV) cache poses\\nsignificant challenges in long-context scenarios. Existing training-free KV\\ncache compression methods typically focus on quantization and token pruning,\\nwhich have compression limits, and excessive sparsity can lead to severe\\nperformance degradation. Other methods design new architectures with less KV\\noverhead but require significant training overhead. To address the above two\\ndrawbacks, we further explore the redundancy in the channel dimension and apply\\nan architecture-level design with minor training costs. Therefore, we introduce\\nCSKV, a training-efficient Channel Shrinking technique for KV cache\\ncompression: (1) We first analyze the singular value distribution of the KV\\ncache, revealing significant redundancy and compression potential along the\\nchannel dimension. 
Based on this observation, we propose using low-rank\\ndecomposition for key and value layers and storing the low-dimension features.\\n(2) To preserve model performance, we introduce a bi-branch KV cache, including\\na window-based full-precision KV cache and a low-precision compressed KV cache.\\n(3) To reduce the training costs, we minimize the layer-wise reconstruction\\nloss for the compressed KV cache instead of retraining the entire LLMs.\\nExtensive experiments show that CSKV can reduce the memory overhead of the KV\\ncache by 80% while maintaining the model's long-context capability. Moreover,\\nwe show that our method can be seamlessly combined with quantization to further\\nreduce the memory overhead, achieving a compression ratio of up to 95%.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10593\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10593","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios
Large Language Models (LLMs) have been widely adopted to process long-context
tasks. However, the large memory overhead of the key-value (KV) cache poses
significant challenges in long-context scenarios. Existing training-free KV
cache compression methods typically focus on quantization and token pruning;
both have limited compression ratios, and excessive sparsity can cause severe
performance degradation. Other methods design new architectures with a smaller
KV cache but require substantial training. To address both drawbacks, we exploit
the redundancy along the channel dimension through an architecture-level design
with minor training cost, and introduce CSKV, a training-efficient Channel
Shrinking technique for KV cache
compression: (1) We first analyze the singular value distribution of the KV
cache, revealing significant redundancy and compression potential along the
channel dimension. Based on this observation, we propose applying low-rank
decomposition to the key and value layers and storing the low-dimensional features.
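As a rough illustration of what such a low-rank decomposition of a key (or value) projection could look like, the sketch below factors a stand-in weight with truncated SVD and caches the rank-r features instead of full-width keys. The shapes, the SVD-based factorization, and the names w_down/w_up are assumptions made for illustration, not the paper's actual procedure; CSKV presumably refines its factors with the training described in point (3).

```python
import torch

def low_rank_factorize(w: torch.Tensor, rank: int):
    """Factor w of shape (d_in, d_out) into w_down (d_in, rank) and
    w_up (rank, d_out) via truncated SVD, so that w ~= w_down @ w_up."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    w_down = u[:, :rank] * s[:rank]      # absorb singular values into the down projection
    w_up = vh[:rank, :]
    return w_down, w_up

# Illustrative shapes only: hidden size 4096, key width 4096, keep ~20% of channels.
d_in, d_out, rank = 4096, 4096, 819
w_k = torch.randn(d_in, d_out) / d_in ** 0.5     # stand-in for a trained key projection

w_down, w_up = low_rank_factorize(w_k, rank)

x = torch.randn(1, 2048, d_in)                   # (batch, seq_len, hidden) activations
k_compressed = x @ w_down                        # what the cache would store: (1, 2048, 819)
k_restored = k_compressed @ w_up                 # full-width keys reconstructed on the fly
print(k_compressed.shape, k_restored.shape)
```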
(2) To preserve model performance, we introduce a bi-branch KV cache that combines
a window-based full-precision KV cache with a low-precision compressed KV cache.
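The bi-branch design is described only at this level of detail, so the toy class below is one possible reading: a sliding window of exact, full-precision keys plus compressed features for every token, with older tokens surviving only in the compressed branch. The class name, the keys-only scope, and the eviction policy are assumptions made for the sketch.

```python
import torch

class BiBranchKVCache:
    """Toy bi-branch cache for one layer's keys: the last `window` tokens keep
    exact full-width keys, while every token also has a rank-r compressed feature."""

    def __init__(self, window: int, w_up: torch.Tensor):
        self.window = window
        self.w_up = w_up            # (rank, d_out): maps compressed features back to keys
        self.full_keys = []         # recent tokens only, full precision
        self.low_rank = []          # all tokens, compressed features

    def append(self, k_full: torch.Tensor, k_low: torch.Tensor):
        """k_full: (d_out,) exact key; k_low: (rank,) compressed feature of the same token."""
        self.full_keys.append(k_full)
        self.low_rank.append(k_low)
        if len(self.full_keys) > self.window:
            self.full_keys.pop(0)   # older tokens survive only in the compressed branch

    def keys_for_attention(self) -> torch.Tensor:
        n_old = len(self.low_rank) - len(self.full_keys)
        old = [f @ self.w_up for f in self.low_rank[:n_old]]   # approximate reconstruction
        return torch.stack(old + self.full_keys)

# Usage with placeholder shapes and a placeholder reconstruction matrix.
rank, d_out = 819, 4096
w_up = torch.randn(rank, d_out) / rank ** 0.5
cache = BiBranchKVCache(window=32, w_up=w_up)
for k_low in torch.randn(100, rank):
    cache.append(k_low @ w_up, k_low)           # toy keys, exactly recoverable here
print(cache.keys_for_attention().shape)         # torch.Size([100, 4096])
```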
(3) To reduce the training cost, we minimize a layer-wise reconstruction loss for
the compressed KV cache instead of retraining the entire LLM.
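A minimal sketch of fitting the factors with a layer-wise reconstruction loss follows, assuming the calibration data, optimizer, and step count are free choices (the abstract does not specify them). Only the small factors receive gradients, which is what keeps the training cost low.

```python
import torch

def fit_layerwise(w_orig, w_down, w_up, calib_x, steps=200, lr=1e-3):
    """Minimize || calib_x @ w_orig - (calib_x @ w_down) @ w_up ||^2 by updating
    only the small factors; the rest of the model never sees a gradient."""
    w_down = w_down.clone().requires_grad_(True)
    w_up = w_up.clone().requires_grad_(True)
    opt = torch.optim.Adam([w_down, w_up], lr=lr)
    target = calib_x @ w_orig                    # frozen output of the original layer
    for _ in range(steps):
        loss = ((calib_x @ w_down @ w_up - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_down.detach(), w_up.detach(), loss.item()

# Calibration activations would normally come from real prompts; random here.
# In practice the factors would be initialized from the SVD sketch above.
d_in, d_out, rank = 4096, 4096, 819
w_k = torch.randn(d_in, d_out) / d_in ** 0.5
calib_x = torch.randn(256, d_in)
init_down, init_up = torch.randn(d_in, rank) * 0.02, torch.randn(rank, d_out) * 0.02
w_down, w_up, final_loss = fit_layerwise(w_k, init_down, init_up, calib_x)
```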
Extensive experiments show that CSKV can reduce the memory overhead of the KV
cache by 80% while maintaining the model's long-context capability. Moreover,
we show that our method can be seamlessly combined with quantization to further
reduce the memory overhead, achieving a compression ratio of up to 95%.
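For intuition, the headline numbers are consistent with simple arithmetic if one assumes roughly a 5x channel reduction and 4-bit quantization of the compressed features; neither value is stated in the abstract, so the figures below are purely illustrative.

```python
# Back-of-the-envelope check on how the headline numbers could compose.
d_full, d_kept = 4096, 819            # keep ~20% of the KV channels (assumed rank)
bits_full, bits_quant = 16, 4         # assumed FP16 baseline and 4-bit quantized features

channel_only = 1 - d_kept / d_full                               # ~0.80
with_quant = 1 - (d_kept / d_full) * (bits_quant / bits_full)    # ~0.95
print(f"channel shrinking alone: {channel_only:.0%}, with quantization: {with_quant:.0%}")
```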