Learning Subject to Constraints via Abstract Gradient Descent
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:214-230, 2025.
Abstract
Deep learning has made significant advances and has been successfully applied in many areas. However, current deep learning models are grounded in probability and statistics, and therefore struggle to adhere to hard constraints and to provide formal guarantees. In contrast, symbolic methods offer inherent rigor but suffer from low efficiency and high cost. In this paper, we propose a novel symbolic learning architecture that handles data fitting and logic satisfaction in a uniform manner. The key idea is to treat logic formulas as discrete functions that can be optimized through Abstract Gradient Descent, a discrete optimization method based on backward abstract interpretation. The efficiency of our approach is illustrated by training a symbolic neural network with 8,542 parameters that accurately recognizes 70,000 handwritten digits. Experiments indicate that the trained model not only fits the data well but also adheres to key properties such as robustness, invariance to zooming and translation, and resistance to noisy backgrounds.
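As a rough intuition for the central idea of optimizing a logic formula as a discrete function, the following toy Python sketch treats a logic constraint as a 0/1-valued loss over integer parameters and minimizes it jointly with a data-fitting loss. This is not the paper's Abstract Gradient Descent (which is based on backward abstract interpretation); it substitutes a simple greedy coordinate search, and all names and details (predict, fit_loss, logic_loss, coordinate_descent, the toy symmetry constraint) are purely illustrative assumptions.

```python
# Toy illustration only (not the paper's Abstract Gradient Descent): a logic
# constraint is read as a discrete 0/1-valued loss over integer parameters and
# minimized together with a data-fitting loss by greedy coordinate search.
import itertools
import random

# Tiny dataset: points labeled by the sign of x0 + x1 (the "data fitting" part).
data = [((x0, x1), 1 if x0 + x1 > 0 else 0)
        for x0 in range(-3, 4) for x1 in range(-3, 4)]

def predict(w, x):
    """Threshold unit with integer weights: a minimal 'symbolic neuron'."""
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

def fit_loss(w):
    """Number of misclassified training points."""
    return sum(predict(w, x) != y for x, y in data)

def logic_loss(w):
    """A logic constraint viewed as a discrete function: the model should be
    invariant under swapping its two inputs (a toy symmetry property)."""
    return sum(predict(w, x) != predict(w, (x[1], x[0])) for x, _ in data)

def coordinate_descent(w, steps=200):
    """Greedy discrete descent: try +/-1 moves on each integer parameter and
    keep any move that lowers the combined loss."""
    loss = lambda v: fit_loss(v) + logic_loss(v)
    for _ in range(steps):
        improved = False
        for i, delta in itertools.product(range(len(w)), (-1, 1)):
            cand = list(w)
            cand[i] += delta
            if loss(cand) < loss(w):
                w, improved = cand, True
        if not improved:
            break
    return w

w = coordinate_descent([random.randint(-2, 2) for _ in range(3)])
print("weights:", w, "fit errors:", fit_loss(w), "logic violations:", logic_loss(w))
```

The sketch only conveys the framing that data fitting and logic satisfaction can be optimized under one discrete objective; the paper's contribution is the abstract-interpretation-based descent procedure that scales this idea to thousands of parameters.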