Learning Subject to Constraints via Abstract Gradient Descent

Shiwen Yu, Wanwei Liu, Zengyu Liu, Liqian Chen, Ting Wang, Naijun Zhan, Ji Wang
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:214-230, 2025.

Abstract

Deep learning has made significant advances and has been applied successfully in many areas. However, current deep learning models are grounded in probability and statistics, and therefore struggle to adhere to constraints and to provide assurances. In contrast, symbolic methods are inherently rigorous but suffer from low efficiency and high cost. In this paper, we propose a novel symbolic learning architecture that handles data fitting and logic satisfaction uniformly. The key idea is to treat logic formulas as discrete functions that can be optimized through Abstract Gradient Descent, a discrete optimization method based on backward abstract interpretation. The efficiency of our approach is illustrated by training a symbolic neural network with 8,542 parameters that accurately recognizes 70,000 handwritten digits. Experiments indicate that the trained model not only fits the data well but also adheres to key properties such as robustness, invariance to zooming and translation, and resistance to noisy backgrounds.
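
As rough intuition for the key idea above (and only that: the formula, names, and update rule below are hypothetical and do not come from the paper), one can picture a desired logical output being propagated backward through a formula over a finite, discrete parameter space, narrowing the set of parameter values that satisfy it. The paper's Abstract Gradient Descent operates on abstract domains via backward abstract interpretation rather than by enumeration; the following is only a minimal Python sketch of that flavor.

    from itertools import product

    # Hypothetical parameterized Boolean "model": (w0 AND x0) OR (w1 AND NOT x1).
    def formula(w, x):
        return (w[0] and x[0]) or (w[1] and not x[1])

    # Backward step over a discrete parameter set: keep only the parameter
    # vectors under which the formula yields the desired output for this input.
    # Over a finite domain this set plays the role a gradient plays over a
    # continuous one: it indicates where to move the parameters next.
    def backward(x, desired, candidates):
        return {w for w in candidates if formula(w, x) == desired}

    def fit(dataset):
        # Start from every 2-bit parameter vector and shrink the set example
        # by example, skipping examples that would empty it.
        candidates = set(product((False, True), repeat=2))
        for x, desired in dataset:
            narrowed = backward(x, desired, candidates)
            if narrowed:
                candidates = narrowed
        return candidates

    if __name__ == "__main__":
        data = [((True, False), True),
                ((False, False), True),
                ((False, True), False)]
        print(fit(data))   # parameter settings consistent with all examples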

Cite this Paper


BibTeX
@InProceedings{pmlr-v288-yu25a,
  title     = {Learning Subject to Constraints via Abstract Gradient Descent},
  author    = {Yu, Shiwen and Liu, Wanwei and Liu, Zengyu and Chen, Liqian and Wang, Ting and Zhan, Naijun and Wang, Ji},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {214--230},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/yu25a/yu25a.pdf},
  url       = {https://proceedings.mlr.press/v288/yu25a.html},
  abstract  = {Deep learning has made significant advancements and has been successfully used in different areas. However, current deep learning models are based on theories of probability and statistics, thus suffer from adhering to constraints and ensuring assurances. In contrast, symbolic methods inherently own rigidity but with low efficiency and high cost. In this paper, we propose a novel symbolic learning architecture which can deal with data fitting and logic satisfaction uniformly. The key idea is to treat logic formulas as discrete functions that can be optimized through Abstract Gradient Descent, a discrete optimization method based on backward abstract interpretation. The efficiency of our approach is illustrated by training a symbolic neural network with 8,542 parameters that can accurately recognize 70,000 handwritten digits. Experiments indicate that the trained model not only fits the data well but also adheres to key properties such as robustness, invariance to zooming, translation, resistance to noise background, and so on.}
}
Endnote
%0 Conference Paper
%T Learning Subject to Constraints via Abstract Gradient Descent
%A Shiwen Yu
%A Wanwei Liu
%A Zengyu Liu
%A Liqian Chen
%A Ting Wang
%A Naijun Zhan
%A Ji Wang
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-yu25a
%I PMLR
%P 214--230
%U https://proceedings.mlr.press/v288/yu25a.html
%V 288
%X Deep learning has made significant advancements and has been successfully used in different areas. However, current deep learning models are based on theories of probability and statistics, thus suffer from adhering to constraints and ensuring assurances. In contrast, symbolic methods inherently own rigidity but with low efficiency and high cost. In this paper, we propose a novel symbolic learning architecture which can deal with data fitting and logic satisfaction uniformly. The key idea is to treat logic formulas as discrete functions that can be optimized through Abstract Gradient Descent, a discrete optimization method based on backward abstract interpretation. The efficiency of our approach is illustrated by training a symbolic neural network with 8,542 parameters that can accurately recognize 70,000 handwritten digits. Experiments indicate that the trained model not only fits the data well but also adheres to key properties such as robustness, invariance to zooming, translation, resistance to noise background, and so on.
APA
Yu, S., Liu, W., Liu, Z., Chen, L., Wang, T., Zhan, N. & Wang, J. (2025). Learning Subject to Constraints via Abstract Gradient Descent. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:214-230. Available from https://proceedings.mlr.press/v288/yu25a.html.
