LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

This paper was accepted at the Efficient Systems for Foundation Models Workshop at ICML 2024

The inference of transformer-based large language models consists of two sequential stages: 1) a prefilling stage to compute the KV cache of the prompt and generate the first token, and 2) a decoding stage to generate subsequent tokens. For long prompts, the KV cache must be computed for all tokens during the prefilling stage, which can significantly increase the time needed to generate the first token. Consequently, the prefilling stage may become a bottleneck in the generation process. An open question is whether all prompt tokens are essential for generating the first token. To answer this, we introduce a novel method, LazyLLM, that selectively computes the KV for tokens important to the next-token prediction in both the prefilling and decoding stages. In contrast to static pruning approaches that prune the prompt all at once, LazyLLM allows language models to dynamically select different subsets of tokens from the context at different generation steps, even if those tokens were pruned in previous steps. Extensive experiments on standard datasets across various tasks demonstrate that LazyLLM is a generic method that can be seamlessly integrated with existing language models to significantly accelerate generation without fine-tuning. For instance, on the multi-document question-answering task, LazyLLM accelerates the prefilling stage of the Llama 2 7B model by 2.34x while maintaining accuracy.
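The core idea — keep only the tokens that matter for the current prediction, and let pruned tokens re-enter at later steps — can be sketched as a simple top-k selection over per-token importance scores. This is a minimal illustration, not the paper's implementation: the function name `select_tokens`, the use of aggregated attention scores as the importance signal, and the `keep_ratio` parameter are assumptions for the sketch.

```python
import numpy as np

def select_tokens(attn_scores, keep_ratio):
    """Return indices of the highest-importance prompt tokens to keep this step.

    attn_scores: 1-D array of importance scores per prompt token
        (hypothetically, attention mass received from the current query).
    keep_ratio: fraction of prompt tokens whose KV is computed this step.
    """
    k = max(1, int(len(attn_scores) * keep_ratio))
    # argpartition finds the k largest scores without a full sort
    kept = np.argpartition(attn_scores, -k)[-k:]
    return np.sort(kept)

# Two generation steps with different attention patterns: tokens pruned
# at step 1 (indices 1, 3, 5) are selected again at step 2 — the
# "dynamic" part that distinguishes this from static one-shot pruning.
step1 = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.02])
step2 = np.array([0.1, 0.9, 0.05, 0.8, 0.02, 0.7])

print(select_tokens(step1, 0.5))  # → [0 2 4]
print(select_tokens(step2, 0.5))  # → [1 3 5]
```

In the actual method, the KV cache for a revived token is computed lazily at the step where it is first needed, so pruning never discards information irrecoverably — only defers computation.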



