MathConstruct: Challenging LLM Reasoning with Constructive Proofs

Mislav Balunovic, Jasper Dekoninck, Nikola Jovanović, Ivo Petrov, Martin Vechev
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:2742-2769, 2025.

Abstract

While Large Language Models (LLMs) demonstrate impressive performance in mathematics, existing math benchmarks come with significant limitations. Many focus on problems with fixed ground-truth answers, and are often saturated due to problem simplicity or the viability of guessing or memorization. Crucially, they capture only a narrow subset of relevant math problems. To address this research gap, we introduce MathConstruct, a new benchmark of 127 challenging problems sourced from various math competitions, which targets constructive proofs, a widely encountered problem type requiring the construction of mathematical objects with specific properties. These proofs are particularly suitable for LLM evaluation, as solution correctness can be easily verified. Our automated verifiers also enable MathConstruct to generate problem variations, used to evaluate robustness. State-of-the-art LLMs solve only 41% of MathConstruct problems, highlighting its complexity and importance for LLM evaluation.
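
To make the verification idea concrete, here is a minimal sketch in Python of what an automated checker for a constructive-proof problem might look like. It is illustrative only, not the paper's actual harness: the ConsecutiveComposites class and the verify and construct names are hypothetical. It uses a classic constructive exercise, exhibit n consecutive composite integers, whose answer is checked directly against the defining property, so instantiating the same template with different n yields problem variations.

    from dataclasses import dataclass
    from math import factorial

    def is_composite(k: int) -> bool:
        """True iff k >= 4 has a nontrivial divisor (trial division)."""
        if k < 4:
            return False
        return any(k % d == 0 for d in range(2, int(k ** 0.5) + 1))

    @dataclass
    class ConsecutiveComposites:
        """Hypothetical MathConstruct-style problem template."""
        n: int  # problem parameter; varying it yields problem variations

        def statement(self) -> str:
            return f"Find {self.n} consecutive composite natural numbers."

        def verify(self, answer: list[int]) -> bool:
            # Check the construction against the defining property directly;
            # no reference solution is needed.
            if len(answer) != self.n:
                return False
            consecutive = all(b - a == 1 for a, b in zip(answer, answer[1:]))
            return consecutive and all(is_composite(k) for k in answer)

    def construct(n: int) -> list[int]:
        # Classical construction: (n+1)! + 2, ..., (n+1)! + (n+1) are
        # all composite, since (n+1)! + k is divisible by k for 2 <= k <= n+1.
        f = factorial(n + 1)
        return [f + k for k in range(2, n + 2)]

    for n in (3, 5, 10):  # three variations of the same problem template
        problem = ConsecutiveComposites(n)
        assert problem.verify(construct(n)), problem.statement()
    print("all variations verified")

Because correctness is checked against the property itself rather than a stored answer, changing the parameter re-instantiates the task with no new ground truth required, which is what makes the variation-based robustness evaluation described in the abstract cheap to run.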

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-balunovic25a,
  title     = {{M}ath{C}onstruct: Challenging {LLM} Reasoning with Constructive Proofs},
  author    = {Balunovic, Mislav and Dekoninck, Jasper and Jovanovi\'{c}, Nikola and Petrov, Ivo and Vechev, Martin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {2742--2769},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/balunovic25a/balunovic25a.pdf},
  url       = {https://proceedings.mlr.press/v267/balunovic25a.html}
}
Endnote
%0 Conference Paper
%T MathConstruct: Challenging LLM Reasoning with Constructive Proofs
%A Mislav Balunovic
%A Jasper Dekoninck
%A Nikola Jovanović
%A Ivo Petrov
%A Martin Vechev
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-balunovic25a
%I PMLR
%P 2742--2769
%U https://proceedings.mlr.press/v267/balunovic25a.html
%V 267
APA
Balunovic, M., Dekoninck, J., Jovanović, N., Petrov, I. & Vechev, M. (2025). MathConstruct: Challenging LLM Reasoning with Constructive Proofs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:2742-2769. Available from https://proceedings.mlr.press/v267/balunovic25a.html.
