LLM Maybe LongLM: SelfExtend LLM Context Window Without Tuning

Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:22099-22114, 2024.

Abstract

It is well known that LLMs cannot generalize well to contexts longer than their training sequence length. This poses challenges when employing LLMs to process long input sequences during inference. In this work, we argue that LLMs themselves have inherent capabilities to handle long contexts without fine-tuning. To achieve this goal, we propose SelfExtend, which extends the context window of LLMs by constructing bi-level attention information: grouped attention and neighbor attention. The grouped attention captures dependencies among tokens that are far apart, while the neighbor attention captures dependencies among adjacent tokens within a specified range. Both levels of attention are computed with the original model’s self-attention mechanism during inference. With minor code modifications, SelfExtend can effortlessly extend existing LLMs’ context window without any fine-tuning. We conduct comprehensive experiments on multiple benchmarks, and the results show that SelfExtend effectively extends existing LLMs’ context window length.
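The bi-level idea can be sketched as a remapping of relative positions. The snippet below is a minimal, illustrative reconstruction based only on the abstract, not the authors' released implementation: the function name, the group_size and neighbor_window parameters, and the exact alignment shift are assumptions made for illustration. It builds the matrix of relative positions a RoPE-style model would see, keeping exact distances inside the neighbor window and floor-divided (grouped) distances beyond it.

```python
import numpy as np

def self_extend_rel_positions(seq_len, group_size=4, neighbor_window=8):
    """Illustrative sketch of SelfExtend-style bi-level relative positions.

    For query i and key j (j <= i, causal attention):
      - if the distance i - j is within neighbor_window, keep the exact
        relative position (neighbor attention);
      - otherwise, use a grouped relative position obtained by floor-dividing
        positions by group_size, shifted so the two regimes meet at the
        window boundary (grouped attention).
    Parameter names and the shift term are assumptions for illustration.
    """
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    dist = i - j                      # exact relative distance

    # Grouped distance, shifted to align with the neighbor-window boundary.
    grouped = (i // group_size) - (j // group_size) \
              + (neighbor_window - neighbor_window // group_size)

    rel = np.where(dist <= neighbor_window, dist, grouped)
    return np.tril(rel)               # causal mask: keep only j <= i

if __name__ == "__main__":
    # With grouping, the largest relative position grows far more slowly than
    # the sequence length, so it can stay within the pretrained window.
    print(self_extend_rel_positions(16, group_size=2, neighbor_window=4))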

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-jin24b,
  title     = {{LLM} Maybe {L}ong{LM}: {S}elf{E}xtend {LLM} Context Window Without Tuning},
  author    = {Jin, Hongye and Han, Xiaotian and Yang, Jingfeng and Jiang, Zhimeng and Liu, Zirui and Chang, Chia-Yuan and Chen, Huiyuan and Hu, Xia},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {22099--22114},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24b/jin24b.pdf},
  url       = {https://proceedings.mlr.press/v235/jin24b.html},
  abstract  = {It is well known that LLMs cannot generalize well to contexts longer than their training sequence length. This poses challenges when employing LLMs to process long input sequences during inference. In this work, we argue that LLMs themselves have inherent capabilities to handle long contexts without fine-tuning. To achieve this goal, we propose SelfExtend, which extends the context window of LLMs by constructing bi-level attention information: grouped attention and neighbor attention. The grouped attention captures dependencies among tokens that are far apart, while the neighbor attention captures dependencies among adjacent tokens within a specified range. Both levels of attention are computed with the original model’s self-attention mechanism during inference. With minor code modifications, SelfExtend can effortlessly extend existing LLMs’ context window without any fine-tuning. We conduct comprehensive experiments on multiple benchmarks, and the results show that SelfExtend effectively extends existing LLMs’ context window length.}
}
Endnote
%0 Conference Paper
%T LLM Maybe LongLM: SelfExtend LLM Context Window Without Tuning
%A Hongye Jin
%A Xiaotian Han
%A Jingfeng Yang
%A Zhimeng Jiang
%A Zirui Liu
%A Chia-Yuan Chang
%A Huiyuan Chen
%A Xia Hu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-jin24b
%I PMLR
%P 22099--22114
%U https://proceedings.mlr.press/v235/jin24b.html
%V 235
%X It is well known that LLMs cannot generalize well to contexts longer than their training sequence length. This poses challenges when employing LLMs to process long input sequences during inference. In this work, we argue that LLMs themselves have inherent capabilities to handle long contexts without fine-tuning. To achieve this goal, we propose SelfExtend, which extends the context window of LLMs by constructing bi-level attention information: grouped attention and neighbor attention. The grouped attention captures dependencies among tokens that are far apart, while the neighbor attention captures dependencies among adjacent tokens within a specified range. Both levels of attention are computed with the original model’s self-attention mechanism during inference. With minor code modifications, SelfExtend can effortlessly extend existing LLMs’ context window without any fine-tuning. We conduct comprehensive experiments on multiple benchmarks, and the results show that SelfExtend effectively extends existing LLMs’ context window length.
APA
Jin, H., Han, X., Yang, J., Jiang, Z., Liu, Z., Chang, C., Chen, H. & Hu, X. (2024). LLM Maybe LongLM: SelfExtend LLM Context Window Without Tuning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:22099-22114. Available from https://proceedings.mlr.press/v235/jin24b.html.
