Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean

Peiyang Song, Kaiyu Yang, Anima Anandkumar
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:144-169, 2025.

Abstract

Neural theorem proving combines large language models (LLMs) with proof assistants such as Lean, where the correctness of formal proofs can be rigorously verified, leaving no room for hallucination. Existing neural theorem provers are pretrained on a fixed collection of data and can offer valuable suggestions, but it is challenging for them to continually prove novel theorems in a fully autonomous mode, where human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a general framework for running LLM inference natively in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Lean users can use our pretrained models or bring their own models, which can run either locally (with or without GPUs) or on the cloud. Using Lean Copilot, we build LLM-based tools that suggest proof steps, complete proof goals, and select relevant premises. Experimental results on the Mathematics in Lean textbook demonstrate the effectiveness of our method compared to existing rule-based proof automation in Lean (AESOP), confirming the value of integrating LLMs into the Lean theorem proving workflow. When assisting humans, Lean Copilot requires only 2.08 manually entered proof steps on average (versus 3.86 required by AESOP); when automating the theorem proving process, Lean Copilot automates 74.2% of proof steps on average, 85% better than AESOP (40.1%). We open source all code and artifacts under a permissive MIT license to facilitate further research.
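
To make the copilot workflow concrete, below is a minimal, hypothetical sketch of how the LLM-based tools described above might be invoked from inside a Lean proof. The import name LeanCopilot and the tactic names suggest_tactics, search_proof, and select_premises follow the project's open-source repository rather than the abstract itself, so they should be treated as assumptions and checked against the current README.

import LeanCopilot

-- Hypothetical interactive session (assumed tactic names):
--   suggest_tactics  -- asks the model for next-step tactic candidates at the goal
--   search_proof     -- tries to complete the current goal automatically
--   select_premises  -- retrieves potentially relevant lemmas for the goal
example (a b : Nat) : a + b = b + a := by
  -- suggest_tactics would display LLM-generated suggestions in the infoview here;
  -- the user then accepts one of them, e.g.:
  exact Nat.add_comm a b

In practice, the user keeps authoring the proof as usual and consults these tactics only where automation is likely to help, which is the human-copilot division of labor the abstract describes.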

Cite this Paper

BibTeX
@InProceedings{pmlr-v288-song25a,
  title     = {Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean},
  author    = {Song, Peiyang and Yang, Kaiyu and Anandkumar, Anima},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {144--169},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/song25a/song25a.pdf},
  url       = {https://proceedings.mlr.press/v288/song25a.html}
}
Endnote
%0 Conference Paper
%T Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean
%A Peiyang Song
%A Kaiyu Yang
%A Anima Anandkumar
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-song25a
%I PMLR
%P 144--169
%U https://proceedings.mlr.press/v288/song25a.html
%V 288
APA
Song, P., Yang, K., & Anandkumar, A. (2025). Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:144-169. Available from https://proceedings.mlr.press/v288/song25a.html.