Symbolic Guidance for Constructivist Learning by Neural Model
Proceedings of the Third International Workshop on Self-Supervised Learning, PMLR 192:63-76, 2022.
Deep learning has made impressive strides but still lacks key concepts necessary to truly reason and act in the world. In parallel, symbolic learning systems have shown success at certain types of abstract reasoning, as demonstrated in Kaggle's Abstraction and Reasoning Challenge. Yet these symbolic learners struggle to generalize to data from the analog world. This paper presents and evaluates ideas for using symbolic learning concepts to guide the learning of a neural network in a constructivist way. We aim to show how a neural network with internal feedback can be used, somewhat like the brain, to suggest appropriate actions to take and to predict the results of those actions; in other words, the system creates an internal model of the world on which it can reason. The neurosymbolic system we consider is inspired by the symbolic learning system of Gary Drescher and by the neural-network cortical columns discussed by Jeff Hawkins. The hybrid system aims to create a synthesis that can generalize over real-world concepts while also learning quickly from few examples as the world changes and prior experience proves inaccurate.