On the Hardness of Probabilistic Neurosymbolic Learning

Jaron Maene, Vincent Derkinderen, Luc De Raedt
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:34203-34218, 2024.

Abstract

The limitations of purely neural learning have sparked an interest in probabilistic neurosymbolic models, which combine neural networks with probabilistic logical reasoning. As these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that although approximating these gradients is intractable in general, it becomes tractable during training. Furthermore, we introduce WeightME, an unbiased gradient estimator based on model sampling. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Lastly, we evaluate the necessity of these guarantees on the gradient. Our experiments indicate that the existing biased approximations indeed struggle to optimize even when exact solving is still feasible.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-maene24a,
  title     = {On the Hardness of Probabilistic Neurosymbolic Learning},
  author    = {Maene, Jaron and Derkinderen, Vincent and De Raedt, Luc},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {34203--34218},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/maene24a/maene24a.pdf},
  url       = {https://proceedings.mlr.press/v235/maene24a.html},
  abstract  = {The limitations of purely neural learning have sparked an interest in probabilistic neurosymbolic models, which combine neural networks with probabilistic logical reasoning. As these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that although approximating these gradients is intractable in general, it becomes tractable during training. Furthermore, we introduce WeightME, an unbiased gradient estimator based on model sampling. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Lastly, we evaluate the necessity of these guarantees on the gradient. Our experiments indicate that the existing biased approximations indeed struggle to optimize even when exact solving is still feasible.}
}
Endnote
%0 Conference Paper
%T On the Hardness of Probabilistic Neurosymbolic Learning
%A Jaron Maene
%A Vincent Derkinderen
%A Luc De Raedt
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-maene24a
%I PMLR
%P 34203--34218
%U https://proceedings.mlr.press/v235/maene24a.html
%V 235
%X The limitations of purely neural learning have sparked an interest in probabilistic neurosymbolic models, which combine neural networks with probabilistic logical reasoning. As these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that although approximating these gradients is intractable in general, it becomes tractable during training. Furthermore, we introduce WeightME, an unbiased gradient estimator based on model sampling. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Lastly, we evaluate the necessity of these guarantees on the gradient. Our experiments indicate that the existing biased approximations indeed struggle to optimize even when exact solving is still feasible.
APA
Maene, J., Derkinderen, V. & De Raedt, L. (2024). On the Hardness of Probabilistic Neurosymbolic Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:34203-34218. Available from https://proceedings.mlr.press/v235/maene24a.html.