CacheGen: Fast Context Loading for Language Model Applications via KV Cache Streaming

  • Yuhan Liu,
  • Hanchen Li,
  • Yihua Cheng,
  • Siddhant Ray,
  • Yuyang Huang,
  • Qizheng Zhang,
  • Kuntai Du,
  • Jiayi Yao,
  • Michael Maire,
  • Henry Hoffmann,
  • Ari Holtzman,
  • Junchen Jiang

SIGCOMM


As large language models (LLMs) take on complex tasks, their inputs are supplemented with longer contexts that incorporate domain knowledge or user-specific information. Yet using long contexts poses a challenge for responsive LLM systems, as nothing can be generated until the whole context is processed by the LLM. While the context-processing delay can be reduced by reusing the KV cache of a context across different inputs, fetching the KV cache, which consists of large tensors, over the network can introduce extra delays.
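To make the reuse concrete, here is a minimal sketch, using Hugging Face transformers, of prefilling a long context once and reusing its KV cache for a later query. The model name, placeholder strings, and the exact way `past_key_values` is passed to `generate` are illustrative assumptions (they vary across library versions), and the sketch omits CacheGen's encoding and streaming entirely.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-v0.1"  # illustrative; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

long_context = "..."   # e.g., a retrieved document or a chat history
user_question = "..."  # a new input that shares the same context

# Prefill the context once; past_key_values holds the KV cache (one pair of
# key/value tensors per transformer layer). This is the object CacheGen
# would encode and stream instead of sending raw tensors.
ctx_ids = tok(long_context, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    kv_cache = model(ctx_ids, use_cache=True).past_key_values

# Answer a new query without re-processing the context: generation resumes
# from the cached keys/values rather than recomputing them.
q_ids = tok(user_question, return_tensors="pt").input_ids.to(model.device)
output = model.generate(
    torch.cat([ctx_ids, q_ids], dim=-1),
    past_key_values=kv_cache,
    max_new_tokens=64,
)
print(tok.decode(output[0, ctx_ids.shape[-1] + q_ids.shape[-1]:]))
```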

CacheGen is a fast context-loading module for LLM systems. First, CacheGen uses a custom tensor encoder, which leverages the KV cache's distributional properties, to encode a KV cache into a more compact bitstream representation with negligible encoding/decoding overhead. This reduces the bandwidth needed to fetch the KV cache. Second, to keep the context-loading delay low and the generation quality high, CacheGen adapts its streaming strategy to cope with changes in available bandwidth. When available bandwidth drops, CacheGen may raise the compression level for a part of the context or choose to recompute its KV cache on the fly.
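As an illustration of the bandwidth adaptation, the sketch below plans how to load one context chunk: it streams the least-compressed bitstream that fits a per-chunk delay budget at the currently estimated bandwidth, and otherwise falls back to the smallest bitstream or to recomputing that chunk's KV cache, whichever is quicker. The function name, parameters, and numbers are hypothetical stand-ins, not CacheGen's actual controller.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChunkPlan:
    action: str                  # "stream" or "recompute"
    level: Optional[int] = None  # chosen compression level when streaming

def plan_chunk(encoded_sizes_bytes: List[int],
               bandwidth_bps: float,
               recompute_delay_s: float,
               delay_budget_s: float) -> ChunkPlan:
    """Decide how to load one chunk of the context.

    encoded_sizes_bytes: bitstream size at each compression level,
        index 0 = least compressed (highest quality).
    """
    # Prefer the least-compressed bitstream that still meets the delay budget.
    for level, size in enumerate(encoded_sizes_bytes):
        if size * 8 / bandwidth_bps <= delay_budget_s:
            return ChunkPlan("stream", level)
    # Nothing fits: use the smallest bitstream or recompute the chunk's
    # KV cache from text, whichever is expected to finish sooner.
    smallest = len(encoded_sizes_bytes) - 1
    fetch_delay_s = encoded_sizes_bytes[smallest] * 8 / bandwidth_bps
    if recompute_delay_s < fetch_delay_s:
        return ChunkPlan("recompute")
    return ChunkPlan("stream", smallest)

# Example: 2 MB / 1 MB / 0.5 MB bitstreams over a 4 Mbps link, 1 s budget.
print(plan_chunk([2_000_000, 1_000_000, 500_000],
                 bandwidth_bps=4e6,
                 recompute_delay_s=0.8,
                 delay_budget_s=1.0))
# -> ChunkPlan(action='stream', level=2)
```

Under the example numbers, only the most compressed bitstream meets the 1 s budget, so the planner streams it at level 2; if even that were slower than recomputation, the chunk would be recomputed instead.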

We test CacheGen on four popular LLMs of various sizes and four datasets (662 contexts in total). Compared with recent systems that reuse the KV cache, CacheGen reduces the KV cache size by 3.5-4.3× and the total delay in fetching and processing contexts by 3.2-3.7×, with negligible impact on the LLM response quality, measured by accuracy or perplexity.