Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations

Bradley P. Allen, Prateek Chhikara, Thomas Macaulay Ferguson, Filip Ilievski, Paul Groth
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:392-419, 2025.

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs’ broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neurosymbolic reasoning that leverages an LLM’s knowledge while preserving the underlying logic’s soundness and completeness properties.
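
To make the idea of an LLM-grounded interpretation concrete, the sketch below maps an atomic sentence to one of the four Belnap-Dunn truth values by querying a yes/no oracle once for the sentence and once for its negation. The choice of four-valued semantics, the ask oracle interface, and the negation phrasing are illustrative assumptions only; the abstract does not specify which paraconsistent logic or prompting scheme the paper uses.

from enum import Enum
from typing import Callable

# Belnap-Dunn four-valued truth values, a common paraconsistent semantics;
# an illustrative assumption, since the abstract does not name the logic.
class V(Enum):
    TRUE = "T"       # evidence for the sentence, none against
    FALSE = "F"      # evidence against the sentence, none for
    BOTH = "B"       # conflicting evidence (the paraconsistent "glut")
    NEITHER = "N"    # no evidence either way

# `ask` stands in for any yes/no oracle over natural-language sentences,
# e.g. a thin wrapper around an LLM API; it is a hypothetical interface,
# not the authors' implementation.
def llm_grounded_interpretation(atom: str,
                                ask: Callable[[str], bool]) -> V:
    """Assign an atomic sentence a four-valued truth value by querying
    the oracle for the sentence and for its negation independently."""
    supports = ask(atom)
    refutes = ask(f"It is not the case that {atom}")
    if supports and refutes:
        return V.BOTH
    if supports:
        return V.TRUE
    if refutes:
        return V.FALSE
    return V.NEITHER

if __name__ == "__main__":
    # Toy oracle with canned answers, used in place of a real LLM call.
    canned = {
        "Paris is the capital of France": True,
        "It is not the case that Paris is the capital of France": False,
    }
    print(llm_grounded_interpretation("Paris is the capital of France",
                                      lambda s: canned.get(s, False)))

Because conflicting answers land on BOTH rather than trivializing the theory, a paraconsistent consequence relation over such interpretations can tolerate the LLM's inconsistencies while the logic's soundness and completeness results carry over unchanged.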

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-allen25a,
  title     = {Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations},
  author    = {Allen, Bradley P. and Chhikara, Prateek and Ferguson, Thomas Macaulay and Ilievski, Filip and Groth, Paul},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {392--419},
  year      = {2025},
  editor    = {H. Gilpin, Leilani and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/allen25a/allen25a.pdf},
  url       = {https://proceedings.mlr.press/v284/allen25a.html},
  abstract  = {Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs’ broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neurosymbolic reasoning that leverages an LLM’s knowledge while preserving the underlying logic’s soundness and completeness properties.}
}
Endnote
%0 Conference Paper
%T Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations
%A Bradley P. Allen
%A Prateek Chhikara
%A Thomas Macaulay Ferguson
%A Filip Ilievski
%A Paul Groth
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-allen25a
%I PMLR
%P 392--419
%U https://proceedings.mlr.press/v284/allen25a.html
%V 284
%X Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs’ broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neurosymbolic reasoning that leverages an LLM’s knowledge while preserving the underlying logic’s soundness and completeness properties.
APA
Allen, B.P., Chhikara, P., Ferguson, T.M., Ilievski, F. & Groth, P. (2025). Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:392-419. Available from https://proceedings.mlr.press/v284/allen25a.html.