Measuring Representational Robustness of Neural Networks Through Shared Invariances

Vedant Nanda, Till Speicher, Camila Kolling, John P Dickerson, Krishna Gummadi, Adrian Weller
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:16368-16382, 2022.

Abstract

A major challenge in studying robustness in deep learning is defining the set of “meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference “human NN” to any NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances. We propose a measure called STIR, which faithfully captures the extent to which two NNs share invariances. STIR re-purposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we are able to gain insights about how shared invariances vary with changes in weight initialization, architecture, loss functions, and training dataset. Our implementation is available at: https://github.com/nvedant07/STIR.
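To make the idea in the abstract concrete, the sketch below shows one plausible way a STIR-style score could be computed: synthesize inputs that the reference network m1 maps to (nearly) the same representation as the originals, then measure, with an off-the-shelf representation similarity measure (linear CKA), how similarly a second network m2 represents the originals and those synthesized inputs. This is a minimal, hedged illustration only; the function names (`invert_representation`, `stir_like_score`), the inversion procedure, and all hyperparameters are assumptions, and the repository linked above is the authoritative implementation.

```python
# Hedged sketch of a STIR-style computation -- NOT the authors' code.
# Assumptions: a simple gradient-based representation inversion for the
# reference model m1, and linear CKA as the representation similarity measure.
import torch
import torch.nn as nn


def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two representation matrices of shape (n_samples, dim)."""
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm() ** 2
    return hsic / ((X.T @ X).norm() * (Y.T @ Y).norm())


def invert_representation(m_ref, x, steps=200, lr=0.1):
    """Synthesize x' (starting from noise) whose m_ref-representation matches m_ref(x)."""
    with torch.no_grad():
        target = m_ref(x)
    x_prime = torch.rand_like(x, requires_grad=True)
    opt = torch.optim.Adam([x_prime], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((m_ref(x_prime) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return x_prime.detach()


def stir_like_score(m_ref, m_other, x):
    """How invariant m_other is to perturbations that m_ref is invariant to."""
    x_prime = invert_representation(m_ref, x)
    return linear_cka(m_other(x), m_other(x_prime)).item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Two toy "representation extractors" standing in for penultimate layers.
    m1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 32))
    m2 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 32))
    x = torch.rand(16, 3, 32, 32)
    print("STIR-like(m2 | m1):", stir_like_score(m1, m2, x))
```

Note that such a score is directional: swapping the roles of m1 and m2 asks a different question (whether m1 respects m2's invariances), which is why the sketch treats one model as the reference.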

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-nanda22a,
  title     = {Measuring Representational Robustness of Neural Networks Through Shared Invariances},
  author    = {Nanda, Vedant and Speicher, Till and Kolling, Camila and Dickerson, John P and Gummadi, Krishna and Weller, Adrian},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {16368--16382},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/nanda22a/nanda22a.pdf},
  url       = {https://proceedings.mlr.press/v162/nanda22a.html},
  abstract  = {A major challenge in studying robustness in deep learning is defining the set of “meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference “human NN” to any NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances. We propose a measure called \stir, which faithfully captures the extent to which two NNs share invariances. \stir re-purposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we are able to gain insights about how shared invariances vary with changes in weight initialization, architecture, loss functions, and training dataset. Our implementation is available at: \url{https://github.com/nvedant07/STIR}.}
}
Endnote
%0 Conference Paper
%T Measuring Representational Robustness of Neural Networks Through Shared Invariances
%A Vedant Nanda
%A Till Speicher
%A Camila Kolling
%A John P Dickerson
%A Krishna Gummadi
%A Adrian Weller
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-nanda22a
%I PMLR
%P 16368--16382
%U https://proceedings.mlr.press/v162/nanda22a.html
%V 162
%X A major challenge in studying robustness in deep learning is defining the set of “meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference “human NN” to any NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances. We propose a measure called STIR, which faithfully captures the extent to which two NNs share invariances. STIR re-purposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we are able to gain insights about how shared invariances vary with changes in weight initialization, architecture, loss functions, and training dataset. Our implementation is available at: https://github.com/nvedant07/STIR.
APA
Nanda, V., Speicher, T., Kolling, C., Dickerson, J.P., Gummadi, K. & Weller, A. (2022). Measuring Representational Robustness of Neural Networks Through Shared Invariances. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:16368-16382. Available from https://proceedings.mlr.press/v162/nanda22a.html.
