Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts

Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:28889-28913, 2023.

Abstract

In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts they are less likely to encode. It has been observed in previous studies that a relatively small set of interactive concepts usually emerges in the knowledge representation of a sufficiently trained neural network, and that such concepts can faithfully explain the network output. Based on this, our study proves that, compared to standard deep neural networks (DNNs), BNNs are less likely to encode complex concepts. Experiments verify our theoretical proofs. Note that the tendency to encode concepts of lower complexity does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. The code is available at https://github.com/sjtu-xai-lab/BNN-concepts.
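In this line of work, an interactive concept is typically formalized as a Harsanyi interaction: a concept corresponds to a subset S of input variables, its effect is I(S) = sum_{T ⊆ S} (-1)^(|S|-|T|) v(T), where v(T) is the network output when only the variables in T are kept and the rest are masked to a baseline, and its complexity is the order |S|. The minimal Python sketch below illustrates this computation under those assumptions; the function names are illustrative and not taken from the authors' repository.

from itertools import chain, combinations

def subsets(s):
    # All subsets of the index tuple s, including the empty set.
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_interaction(v, S):
    # I(S) = sum over T subseteq S of (-1)^(|S| - |T|) * v(T).
    # v maps a tuple of "kept" variable indices to a scalar model output
    # (variables outside T are assumed masked to a baseline value).
    return sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))

# Toy usage with cached model outputs: the interaction of S = (0, 1)
# isolates the effect jointly attributable to variables 0 and 1.
scores = {(): 0.0, (0,): 1.0, (1,): 0.5, (0, 1): 2.0}
v = lambda T: scores[tuple(sorted(T))]
print(harsanyi_interaction(v, (0, 1)))  # 2.0 - 1.0 - 0.5 + 0.0 = 0.5

Under this formulation, the paper's claim is that the effects of high-order (complex) interactions tend to vanish in mean-field variational BNNs relative to standard DNNs.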

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-ren23a,
  title = {{B}ayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts},
  author = {Ren, Qihan and Deng, Huiqi and Chen, Yunuo and Lou, Siyu and Zhang, Quanshi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages = {28889--28913},
  year = {2023},
  editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume = {202},
  series = {Proceedings of Machine Learning Research},
  month = {23--29 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v202/ren23a/ren23a.pdf},
  url = {https://proceedings.mlr.press/v202/ren23a.html},
  abstract = {In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts they are less likely to encode. It has been observed in previous studies that a relatively small set of interactive concepts usually emerges in the knowledge representation of a sufficiently trained neural network, and that such concepts can faithfully explain the network output. Based on this, our study proves that, compared to standard deep neural networks (DNNs), BNNs are less likely to encode complex concepts. Experiments verify our theoretical proofs. Note that the tendency to encode concepts of lower complexity does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. The code is available at https://github.com/sjtu-xai-lab/BNN-concepts.}
}
Endnote
%0 Conference Paper
%T Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts
%A Qihan Ren
%A Huiqi Deng
%A Yunuo Chen
%A Siyu Lou
%A Quanshi Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ren23a
%I PMLR
%P 28889--28913
%U https://proceedings.mlr.press/v202/ren23a.html
%V 202
%X In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts they are less likely to encode. It has been observed in previous studies that a relatively small set of interactive concepts usually emerges in the knowledge representation of a sufficiently trained neural network, and that such concepts can faithfully explain the network output. Based on this, our study proves that, compared to standard deep neural networks (DNNs), BNNs are less likely to encode complex concepts. Experiments verify our theoretical proofs. Note that the tendency to encode concepts of lower complexity does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. The code is available at https://github.com/sjtu-xai-lab/BNN-concepts.
APA
Ren, Q., Deng, H., Chen, Y., Lou, S. & Zhang, Q. (2023). Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:28889-28913. Available from https://proceedings.mlr.press/v202/ren23a.html.
