Neurosymbolic Learning in Structured Probability Spaces: A Case Study

Ole Fenske, Sebastian Bader, Thomas Kirste
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:938-956, 2025.

Abstract

This paper examines the impact of neurosymbolic learning on sequence analysis in Structured Probability Spaces (SPS), comparing its effectiveness against a purely neural approach. Sequence analysis in SPS is challenging due to the combinatorial explosion of states and the difficulty of obtaining sufficient annotated training samples. Additionally, in SPS, the set of realizations with non-zero support is often a scattered, non-trivial subset of the Cartesian product of variables, adding complexity to learning and inference. The problem of sequence analysis in SPS emerges, for example, in reconstructing the activities of goal-directed agents from noisy and ambiguous sensor data. We explore the potential of neurosymbolic methods, which integrate symbolic background knowledge with neural learning, to constrain the hypothesis space and improve learning efficiency. Specifically, we conduct a simulation study in human activity recognition using DeepProbLog as a representative for neurosymbolic learning. Our results demonstrate that incorporating symbolic knowledge improves sample efficiency, generalization, and zero-shot learning, compared to a purely neural approach. Furthermore, we show that neurosymbolic models maintain robust performance under data scarcity while offering enhanced interpretability and stability. These findings suggest that neurosymbolic learning provides a promising foundation for sequence analysis in complex, structured domains, where purely neural approaches struggle with insufficient training data and limited generalization ability.

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-fenske25a,
  title     = {Neurosymbolic Learning in Structured Probability Spaces: A Case Study},
  author    = {Fenske, Ole and Bader, Sebastian and Kirste, Thomas},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {938--956},
  year      = {2025},
  editor    = {Gilpin, Leilani H. and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/fenske25a/fenske25a.pdf},
  url       = {https://proceedings.mlr.press/v284/fenske25a.html},
  abstract  = {This paper examines the impact of neurosymbolic learning on sequence analysis in Structured Probability Spaces (SPS), comparing its effectiveness against a purely neural approach. Sequence analysis in SPS is challenging due to the combinatorial explosion of states and the difficulty of obtaining sufficient annotated training samples. Additionally, in SPS, the set of realizations with non-zero support is often a scattered, non-trivial subset of the Cartesian product of variables, adding complexity to learning and inference. The problem of sequence analysis in SPS emerges, for example, in reconstructing the activities of goal-directed agents from noisy and ambiguous sensor data. We explore the potential of neurosymbolic methods, which integrate symbolic background knowledge with neural learning, to constrain the hypothesis space and improve learning efficiency. Specifically, we conduct a simulation study in human activity recognition using DeepProbLog as a representative for neurosymbolic learning. Our results demonstrate that incorporating symbolic knowledge improves sample efficiency, generalization, and zero-shot learning, compared to a purely neural approach. Furthermore, we show that neurosymbolic models maintain robust performance under data scarcity while offering enhanced interpretability and stability. These findings suggest that neurosymbolic learning provides a promising foundation for sequence analysis in complex, structured domains, where purely neural approaches struggle with insufficient training data and limited generalization ability.}
}
Endnote
%0 Conference Paper
%T Neurosymbolic Learning in Structured Probability Spaces: A Case Study
%A Ole Fenske
%A Sebastian Bader
%A Thomas Kirste
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-fenske25a
%I PMLR
%P 938--956
%U https://proceedings.mlr.press/v284/fenske25a.html
%V 284
%X This paper examines the impact of neurosymbolic learning on sequence analysis in Structured Probability Spaces (SPS), comparing its effectiveness against a purely neural approach. Sequence analysis in SPS is challenging due to the combinatorial explosion of states and the difficulty of obtaining sufficient annotated training samples. Additionally, in SPS, the set of realizations with non-zero support is often a scattered, non-trivial subset of the Cartesian product of variables, adding complexity to learning and inference. The problem of sequence analysis in SPS emerges, for example, in reconstructing the activities of goal-directed agents from noisy and ambiguous sensor data. We explore the potential of neurosymbolic methods, which integrate symbolic background knowledge with neural learning, to constrain the hypothesis space and improve learning efficiency. Specifically, we conduct a simulation study in human activity recognition using DeepProbLog as a representative for neurosymbolic learning. Our results demonstrate that incorporating symbolic knowledge improves sample efficiency, generalization, and zero-shot learning, compared to a purely neural approach. Furthermore, we show that neurosymbolic models maintain robust performance under data scarcity while offering enhanced interpretability and stability. These findings suggest that neurosymbolic learning provides a promising foundation for sequence analysis in complex, structured domains, where purely neural approaches struggle with insufficient training data and limited generalization ability.
APA
Fenske, O., Bader, S., & Kirste, T. (2025). Neurosymbolic Learning in Structured Probability Spaces: A Case Study. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:938-956. Available from https://proceedings.mlr.press/v284/fenske25a.html.
