The Complexity of Explaining Neural Networks Through (group) Invariants

Danielle Ensign, Scott Neville, Arnab Paul, Suresh Venkatasubramanian
Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:341-359, 2017.

Abstract

Ever since the work of Minsky and Papert, it has been thought that neural networks derive their effectiveness by finding representations of the data that are invariant with respect to the task. In other words, the representations eliminate components of the data that vary in ways irrelevant to the task. These invariants are naturally expressed in terms of group operations, and thus an understanding of these groups is key to explaining the effectiveness of the neural network. Moreover, a line of work in deep learning has shown that explicit knowledge of group invariants can lead to more effective training.
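
To make the notion concrete, the following is a standard formalization of group invariance of a learned representation; the symbols $\phi$, $G$, and $X$ are our notation for illustration, not taken from the paper:

% Group invariance of a representation (standard definition, stated for concreteness).
% G is a group acting on the input space X; \phi is the learned representation.
\[
  \phi(g \cdot x) \;=\; \phi(x) \qquad \text{for all } g \in G,\ x \in X.
\]
% Example: if G is the group of image translations and \phi is a classifier's
% penultimate layer, invariance says that shifting the image leaves \phi unchanged.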

In this paper, we investigate the difficulty of discovering anything about these implicit invariants. Unfortunately, our main results are negative: we show that a variety of questions about invariant representations are NP-hard, even in approximate settings. Moreover, these results do not depend on the particular architecture used: in fact, they follow as soon as the network architecture is powerful enough to be universal. The key idea behind our results is that if we can efficiently find the symmetries of a problem, then we can solve the problem itself.
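
The last sentence is the crux of the hardness argument. The following schematic illustrates the general reduction pattern it suggests; this is our hypothetical sketch, not the paper's exact construction:

% Schematic of the reduction pattern behind the hardness results
% (a hedged sketch; the paper's constructions differ in detail).
% Inv(N) denotes the symmetry group of a network N:
\[
  \mathrm{Inv}(N) \;=\; \{\, g \in G \;:\; N(g \cdot x) = N(x) \ \text{for all inputs } x \,\}.
\]
% Reduction idea: from a hard instance \varphi (e.g., a Boolean formula), one
% constructs a network N_\varphi such that any nontrivial element of
% \mathrm{Inv}(N_\varphi) reveals a solution to \varphi. An efficient procedure
% for recovering such invariants would therefore decide an NP-hard problem.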

Cite this Paper


BibTeX
@InProceedings{pmlr-v76-ensign17a,
  title     = {The Complexity of Explaining Neural Networks Through (group) Invariants},
  author    = {Ensign, Danielle and Neville, Scott and Paul, Arnab and Venkatasubramanian, Suresh},
  booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
  pages     = {341--359},
  year      = {2017},
  editor    = {Hanneke, Steve and Reyzin, Lev},
  volume    = {76},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v76/ensign17a/ensign17a.pdf},
  url       = {https://proceedings.mlr.press/v76/ensign17a.html}
}
Endnote
%0 Conference Paper
%T The Complexity of Explaining Neural Networks Through (group) Invariants
%A Danielle Ensign
%A Scott Neville
%A Arnab Paul
%A Suresh Venkatasubramanian
%B Proceedings of the 28th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Steve Hanneke
%E Lev Reyzin
%F pmlr-v76-ensign17a
%I PMLR
%P 341--359
%U https://proceedings.mlr.press/v76/ensign17a.html
%V 76
APA
Ensign, D., Neville, S., Paul, A. & Venkatasubramanian, S. (2017). The Complexity of Explaining Neural Networks Through (group) Invariants. Proceedings of the 28th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 76:341-359. Available from https://proceedings.mlr.press/v76/ensign17a.html.
