Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks

Sina Bagheri Nezhad, Ameeta Agrawal
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:1059-1076, 2025.

Abstract

Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce NeuroSymbolic Augmented Reasoning (NSAR), which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.
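The abstract describes a two-stage pattern: extract symbolic facts from text, then generate and execute Python code over them. The following is a minimal illustrative sketch of that pattern, not the authors' implementation; the fact triples and the `generated_reasoning` function are hypothetical stand-ins for what an LLM would produce at each stage.

```python
# Stage 1 (hypothetical): an LLM extracts symbolic facts from a long,
# possibly multilingual context. Here the extraction result is hard-coded
# as (entity, attribute, value) triples scattered across the document.
facts = [
    ("Alice", "score", 41),  # appears early in the context
    ("Bob", "score", 67),    # appears mid-context
    ("Chen", "score", 55),   # appears late in the context
]

# Stage 2 (hypothetical): the LLM emits executable Python over those facts.
# This function stands in for that generated reasoning code.
def generated_reasoning(facts):
    """Multi-target reasoning: rank all entities by score, highest first."""
    scores = {entity: value for (entity, attr, value) in facts if attr == "score"}
    return sorted(scores, key=scores.get, reverse=True)

# Stage 3: execute the symbolic program to obtain a verifiable answer.
print(generated_reasoning(facts))  # ['Bob', 'Chen', 'Alice']
```

Executing explicit code over extracted facts, rather than asking the model to aggregate scattered values in free text, is what makes the reasoning step inspectable and deterministic.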

Cite this Paper
BibTeX
@InProceedings{pmlr-v284-nezhad25a,
  title     = {Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks},
  author    = {Nezhad, Sina Bagheri and Agrawal, Ameeta},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {1059--1076},
  year      = {2025},
  editor    = {H. Gilpin, Leilani and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/nezhad25a/nezhad25a.pdf},
  url       = {https://proceedings.mlr.press/v284/nezhad25a.html},
  abstract  = {Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce {NeuroSymbolic Augmented Reasoning (NSAR)}, which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.}
}
Endnote
%0 Conference Paper
%T Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks
%A Sina Bagheri Nezhad
%A Ameeta Agrawal
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-nezhad25a
%I PMLR
%P 1059--1076
%U https://proceedings.mlr.press/v284/nezhad25a.html
%V 284
%X Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce NeuroSymbolic Augmented Reasoning (NSAR), which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.
APA
Nezhad, S. B. & Agrawal, A. (2025). Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:1059-1076. Available from https://proceedings.mlr.press/v284/nezhad25a.html.