LongCoder: A Long-Range Pre-trained Language Model for Code Completion

Daya Guo, Canwen Xu, Nan Duan, Jian Yin, Julian McAuley
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:12098-12107, 2023.

Abstract

In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference.
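
To make the described attention pattern concrete, below is a minimal Python/NumPy sketch of how a sparse mask combining a causal sliding window with globally accessible bridge and memory tokens could be assembled. The function name build_sparse_mask, the window size, the bridge spacing, and the hand-picked memory position are illustrative assumptions for this example only, not the authors' implementation; in the paper, memory tokens correspond to statements such as package imports and class/function/structure definitions, and bridge tokens are inserted throughout the input to aggregate local information.

import numpy as np

def build_sparse_mask(num_tokens, window, bridge_positions, memory_positions):
    """Boolean mask where mask[i, j] is True iff token i may attend to token j."""
    idx = np.arange(num_tokens)
    causal = idx[None, :] <= idx[:, None]            # never attend to future tokens
    local = (idx[:, None] - idx[None, :]) < window   # causal sliding window
    mask = causal & local

    # Bridge and memory tokens are globally accessible: every later token can
    # attend to them regardless of distance, and they in turn can attend to all
    # earlier tokens, so they aggregate information beyond the local window.
    is_global = np.zeros(num_tokens, dtype=bool)
    is_global[list(bridge_positions) + list(memory_positions)] = True
    mask |= causal & is_global[None, :]   # any token -> earlier global tokens
    mask |= causal & is_global[:, None]   # global tokens -> all earlier tokens
    return mask

# Toy example: 16 tokens, a window of 4, a bridge token every 8 positions, and
# position 0 treated as a memory token (e.g. the start of an import statement).
mask = build_sparse_mask(16, window=4, bridge_positions=[7, 15], memory_positions=[0])
print(mask.astype(int))

Under this kind of pattern each query attends to at most window-many local keys plus the handful of global tokens, so attention cost grows roughly linearly with sequence length rather than quadratically, which is what allows long code input to be handled at inference cost comparable to a standard-length model.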

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-guo23j,
  title     = {{L}ong{C}oder: A Long-Range Pre-trained Language Model for Code Completion},
  author    = {Guo, Daya and Xu, Canwen and Duan, Nan and Yin, Jian and Mcauley, Julian},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {12098--12107},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/guo23j/guo23j.pdf},
  url       = {https://proceedings.mlr.press/v202/guo23j.html},
  abstract  = {In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference.}
}
APA
Guo, D., Xu, C., Duan, N., Yin, J., & McAuley, J. (2023). LongCoder: A Long-Range Pre-trained Language Model for Code Completion. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:12098-12107. Available from https://proceedings.mlr.press/v202/guo23j.html.