MInference: Accelerating Pre-filling for Long-context LLMs via Dynamic Sparse Attention
May 2024
MInference 1.0 leverages the dynamic sparse nature of LLMs’ attention, which exhibits some static patterns, to speed up the pre-filling stage for long-context LLMs. It first determines offline which sparse pattern each attention head belongs to, then approximates the sparse index online…
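
As a concrete illustration of the "determine the pattern offline, approximate the index online" idea, below is a minimal, hypothetical sketch in plain PyTorch of one such head pattern, a vertical-slash style head. It is not the MInference implementation or API: the function name `vertical_slash_attention` and the parameters `last_q`, `top_vertical`, and `top_slash` are illustrative assumptions. The sketch uses the last few queries to estimate which key columns (vertical lines) and which query-key offsets (slash diagonals) carry most of the attention mass, then restricts attention to those positions.

```python
import torch

def vertical_slash_attention(q, k, v, last_q=64, top_vertical=256, top_slash=256):
    # q, k, v: [seq_len, head_dim] for a single attention head at pre-filling time.
    seq_len, head_dim = q.shape
    last_q = min(last_q, seq_len)
    scale = head_dim ** -0.5

    # Online index approximation: score only the last `last_q` queries against all keys.
    rows_probe = torch.arange(seq_len - last_q, seq_len).unsqueeze(1)   # [last_q, 1]
    cols = torch.arange(seq_len)                                        # [seq_len]
    probe = (q[-last_q:] @ k.T) * scale                                 # [last_q, seq_len]
    probe = probe.masked_fill(cols > rows_probe, float("-inf")).softmax(dim=-1)

    # "Vertical" lines: key columns that the recent queries attend to heavily.
    vertical_idx = probe.sum(dim=0).topk(min(top_vertical, seq_len)).indices

    # "Slash" lines: query-key offsets (diagonals) that carry a lot of attention mass.
    offsets = (rows_probe - cols).clamp(min=0)                          # [last_q, seq_len]
    slash_score = torch.zeros(seq_len, dtype=probe.dtype)
    slash_score.scatter_add_(0, offsets.reshape(-1), probe.reshape(-1))
    slash_idx = slash_score.topk(min(top_slash, seq_len)).indices

    # Build the dynamic sparse (causal) mask from the selected indices.
    rows = torch.arange(seq_len).unsqueeze(1)
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    mask[:, vertical_idx] = True                   # keep selected vertical columns
    mask |= torch.isin(rows - cols, slash_idx)     # keep selected slash diagonals
    mask |= rows == cols                           # always keep the diagonal
    mask &= cols <= rows                           # causal constraint

    # Sparse attention, shown here as a masked dense computation for clarity only;
    # the real speedup requires kernels that skip the masked regions entirely.
    scores = (q @ k.T) * scale
    return scores.masked_fill(~mask, float("-inf")).softmax(dim=-1) @ v
```

In this sketch the final step is still a dense matmul with a mask, which is only for readability; the pre-filling speedup in practice comes from computing attention exclusively on the selected sparse index with optimized GPU kernels.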