State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations

Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio, Michael Mozer
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3622-3631, 2019.

Abstract

Machine learning promises methods that generalize well from finite labeled data. However, the brittleness of existing neural net approaches is revealed by notable failures, such as the existence of adversarial examples that are misclassified despite being nearly identical to a training example, or the inability of recurrent sequence-processing nets to stay on track without teacher forcing. We introduce a method, which we refer to as _state reification_, that involves modeling the distribution of hidden states over the training data and then projecting hidden states observed during testing toward this distribution. Our intuition is that if the network can remain in a familiar manifold of hidden space, subsequent layers of the net should be well trained to respond appropriately. We show that this state-reification method helps neural nets to generalize better, especially when labeled data are sparse, and also helps overcome the challenge of achieving robust generalization with adversarial training.
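To make the idea concrete, below is a minimal sketch (not the authors' exact architecture) of a reification layer implemented as a denoising autoencoder over hidden activations: during training it learns to map corrupted hidden states back onto the distribution of clean training states, and at test time it projects off-manifold activations toward that familiar region before they reach subsequent layers. All module names, layer sizes, and the noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn


class StateReification(nn.Module):
    """Sketch of a state-reification layer: a denoising autoencoder over
    hidden states that pulls activations toward the manifold of hidden
    states seen during training. Sizes and noise level are illustrative."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int, noise_std: float = 0.5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(hidden_dim, bottleneck_dim), nn.ReLU())
        self.decoder = nn.Linear(bottleneck_dim, hidden_dim)
        self.noise_std = noise_std

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # During training, corrupt the hidden state so the autoencoder learns
        # to map nearby (off-manifold) points back to clean training states.
        h_in = h + self.noise_std * torch.randn_like(h) if self.training else h
        return self.decoder(self.encoder(h_in))


class ReifiedClassifier(nn.Module):
    """Toy classifier with a reification step between two hidden layers."""

    def __init__(self, in_dim: int = 784, hidden_dim: int = 256, n_classes: int = 10):
        super().__init__()
        self.lower = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.reify = StateReification(hidden_dim, bottleneck_dim=64)
        self.upper = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.lower(x)
        h = self.reify(h)  # project h toward the distribution of training states
        return self.upper(h)
```

In a setup like this, the reification module would typically be trained jointly with the task loss plus a reconstruction loss between the reified and clean hidden states; at test time the noise-free forward pass simply nudges unfamiliar hidden states back toward the training-state manifold.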

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-lamb19a,
  title     = {State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations},
  author    = {Lamb, Alex and Binas, Jonathan and Goyal, Anirudh and Subramanian, Sandeep and Mitliagkas, Ioannis and Bengio, Yoshua and Mozer, Michael},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3622--3631},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/lamb19a/lamb19a.pdf},
  url       = {https://proceedings.mlr.press/v97/lamb19a.html},
  abstract  = {Machine learning promises methods that generalize well from finite labeled data. However, the brittleness of existing neural net approaches is revealed by notable failures, such as the existence of adversarial examples that are misclassified despite being nearly identical to a training example, or the inability of recurrent sequence-processing nets to stay on track without teacher forcing. We introduce a method, which we refer to as _state reification_, that involves modeling the distribution of hidden states over the training data and then projecting hidden states observed during testing toward this distribution. Our intuition is that if the network can remain in a familiar manifold of hidden space, subsequent layers of the net should be well trained to respond appropriately. We show that this state-reification method helps neural nets to generalize better, especially when labeled data are sparse, and also helps overcome the challenge of achieving robust generalization with adversarial training.}
}
Endnote
%0 Conference Paper
%T State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations
%A Alex Lamb
%A Jonathan Binas
%A Anirudh Goyal
%A Sandeep Subramanian
%A Ioannis Mitliagkas
%A Yoshua Bengio
%A Michael Mozer
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-lamb19a
%I PMLR
%P 3622--3631
%U https://proceedings.mlr.press/v97/lamb19a.html
%V 97
%X Machine learning promises methods that generalize well from finite labeled data. However, the brittleness of existing neural net approaches is revealed by notable failures, such as the existence of adversarial examples that are misclassified despite being nearly identical to a training example, or the inability of recurrent sequence-processing nets to stay on track without teacher forcing. We introduce a method, which we refer to as _state reification_, that involves modeling the distribution of hidden states over the training data and then projecting hidden states observed during testing toward this distribution. Our intuition is that if the network can remain in a familiar manifold of hidden space, subsequent layers of the net should be well trained to respond appropriately. We show that this state-reification method helps neural nets to generalize better, especially when labeled data are sparse, and also helps overcome the challenge of achieving robust generalization with adversarial training.
APA
Lamb, A., Binas, J., Goyal, A., Subramanian, S., Mitliagkas, I., Bengio, Y. & Mozer, M. (2019). State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3622-3631. Available from https://proceedings.mlr.press/v97/lamb19a.html.
