Learning Reasoning Strategies in End-to-End Differentiable Proving

Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6938-6949, 2020.

Abstract

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-minervini20a,
  title     = {Learning Reasoning Strategies in End-to-End Differentiable Proving},
  author    = {Minervini, Pasquale and Riedel, Sebastian and Stenetorp, Pontus and Grefenstette, Edward and Rockt{\"a}schel, Tim},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6938--6949},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/minervini20a/minervini20a.pdf},
  url       = {https://proceedings.mlr.press/v119/minervini20a.html},
  abstract  = {Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable.}
}
Endnote
%0 Conference Paper
%T Learning Reasoning Strategies in End-to-End Differentiable Proving
%A Pasquale Minervini
%A Sebastian Riedel
%A Pontus Stenetorp
%A Edward Grefenstette
%A Tim Rocktäschel
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-minervini20a
%I PMLR
%P 6938--6949
%U https://proceedings.mlr.press/v119/minervini20a.html
%V 119
%X Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable.
APA
Minervini, P., Riedel, S., Stenetorp, P., Grefenstette, E., & Rocktäschel, T. (2020). Learning Reasoning Strategies in End-to-End Differentiable Proving. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6938-6949. Available from https://proceedings.mlr.press/v119/minervini20a.html.