Logic Gate Neural Networks are Good for Verification

Fabian Kresse, Emily Yu, Christoph H. Lampert, Thomas A. Henzinger
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:90-103, 2025.

Abstract

Learning-based systems are increasingly deployed across various domains, yet the complexity of traditional neural networks poses significant challenges for formal verification. Unlike conventional neural networks, learned Logic Gate Networks (LGNs) replace multiplications with Boolean logic gates, yielding a sparse, netlist-like architecture that is inherently more amenable to symbolic verification, while still delivering promising performance. In this paper, we introduce a SAT encoding for verifying global robustness and fairness in LGNs. We evaluate our method on five benchmark datasets, including a newly constructed 5-class variant, and find that LGNs are both verification-friendly and maintain strong predictive performance.
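The paper's own SAT encoding is not reproduced on this page, but the idea it describes, that each gate of an LGN is a Boolean constraint, so robustness questions become satisfiability queries over two copies of the network, can be illustrated directly. Below is a minimal, hypothetical sketch using the python-sat package: a hand-written three-input netlist stands in for a learned LGN, and a counterexample query asks whether any single-bit input flip can change the output. The netlist, variable layout, and perturbation model are all invented for illustration and are not the authors' encoding.

# Hypothetical sketch: SAT-based global robustness check for a tiny
# logic gate network. NOT the paper's encoding -- an invented netlist
# illustrating the general two-copy construction.
# Requires: pip install python-sat
from pysat.solvers import Glucose3


def xor_clauses(z, a, b):
    # Tseitin clauses for z <-> (a XOR b)
    return [[-z, a, b], [-z, -a, -b], [z, -a, b], [z, a, -b]]


def and_clauses(z, a, b):
    # Tseitin clauses for z <-> (a AND b)
    return [[-z, a], [-z, b], [z, -a, -b]]


def or_clauses(z, a, b):
    # Tseitin clauses for z <-> (a OR b)
    return [[z, -a], [z, -b], [-z, a, b]]


def encode_network(inputs, base):
    # Fixed illustrative netlist: out = (x1 AND x2) OR (x2 XOR x3).
    # `base` offsets auxiliary variable IDs so two copies can coexist.
    x1, x2, x3 = inputs
    g1, g2, out = base, base + 1, base + 2
    clauses = and_clauses(g1, x1, x2) + xor_clauses(g2, x2, x3) \
        + or_clauses(out, g1, g2)
    return clauses, out


solver = Glucose3()

# Two copies of the network, over inputs x = (1,2,3) and x' = (4,5,6).
c1, out1 = encode_network([1, 2, 3], base=10)
c2, out2 = encode_network([4, 5, 6], base=20)
for cl in c1 + c2:
    solver.add_clause(cl)

# d_i <-> (x_i XOR x'_i): the i-th input bit differs between the copies.
diffs = [30, 31, 32]
for d, a, b in zip(diffs, [1, 2, 3], [4, 5, 6]):
    for cl in xor_clauses(d, a, b):
        solver.add_clause(cl)

# Perturbation model: exactly one flipped input bit.
solver.add_clause(diffs)                     # at least one bit differs
for i in range(len(diffs)):                  # at most one bit differs
    for j in range(i + 1, len(diffs)):
        solver.add_clause([-diffs[i], -diffs[j]])

# Counterexample query: the two output bits disagree.
solver.add_clause([out1, out2])
solver.add_clause([-out1, -out2])

if solver.solve():
    print("counterexample input pair found:", solver.get_model())
else:
    print("globally robust to single-bit input flips")

Because the query quantifies over all input pairs at Hamming distance one rather than a neighborhood of a fixed sample, an UNSAT answer is a global guarantee. A similar two-copy construction is commonly used for fairness verification, constraining the two inputs to differ only in protected attributes.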

Cite this Paper


BibTeX
@InProceedings{pmlr-v288-kresse25a,
  title     = {Logic Gate Neural Networks are Good for Verification},
  author    = {Kresse, Fabian and Yu, Emily and Lampert, Christoph H. and Henzinger, Thomas A.},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {90--103},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/kresse25a/kresse25a.pdf},
  url       = {https://proceedings.mlr.press/v288/kresse25a.html}
}
Endnote
%0 Conference Paper
%T Logic Gate Neural Networks are Good for Verification
%A Fabian Kresse
%A Emily Yu
%A Christoph H. Lampert
%A Thomas A. Henzinger
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-kresse25a
%I PMLR
%P 90--103
%U https://proceedings.mlr.press/v288/kresse25a.html
%V 288
APA
Kresse, F., Yu, E., Lampert, C. H., & Henzinger, T. A. (2025). Logic Gate Neural Networks are Good for Verification. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:90-103. Available from https://proceedings.mlr.press/v288/kresse25a.html.