Understanding Robustness in Teacher-Student Setting: A New Perspective

Zhuolin Yang, Zhaoxi Chen, Tiffany Cai, Xinyun Chen, Bo Li, Yuandong Tian
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3313-3321, 2021.

Abstract

Adversarial examples have emerged as a ubiquitous property of machine learning models: bounded adversarial perturbations can mislead a model into making arbitrarily incorrect predictions. Such examples provide a way to assess the robustness of machine learning models, as well as a proxy for understanding the model training process. Extensive studies have tried to explain the existence of adversarial examples and to provide ways to improve model robustness (e.g., adversarial training). While these studies mostly focus on models trained on datasets with predefined labels, we leverage the teacher-student framework and assume a teacher model, or \emph{oracle}, that provides the labels for given instances. We extend \citet{tian2019student} to the case of low-rank input data and show that \emph{student specialization} (a trained student neuron becoming highly correlated with some teacher neuron at the same layer) still happens within the input subspace, but that teacher and student nodes can \emph{differ wildly} outside the data subspace, which we conjecture leads to adversarial examples. Extensive experiments show that student specialization correlates strongly with model robustness across different scenarios, including students trained via standard training, adversarial training, confidence-calibrated adversarial training, and training with a robust feature dataset. Our study may shed light on future exploration of adversarial examples and on enhancing model robustness via principled data augmentation.
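
The specialization claim can be made concrete with a small simulation. Below is a minimal NumPy sketch, not the paper's exact architecture or training procedure; the two-layer ReLU networks, squared loss, plain gradient descent, and all sizes and hyperparameters are illustrative assumptions. A student is trained on labels from a fixed teacher whose inputs lie in a rank-r subspace, and we then compare cosine similarities between student and teacher weights inside the subspace and on its orthogonal complement.

```python
# Minimal sketch of in-subspace student specialization (assumed setup:
# two-layer ReLU teacher/student, squared loss, plain gradient descent).
import numpy as np

rng = np.random.default_rng(0)
d, r, m_teacher, m_student, n = 20, 5, 4, 8, 4000

# Low-rank inputs: x = B z with B an orthonormal d x r basis, z ~ N(0, I).
B, _ = np.linalg.qr(rng.standard_normal((d, r)))
X = rng.standard_normal((n, r)) @ B.T

# Fixed teacher (the "oracle") provides the labels.
W_t = rng.standard_normal((m_teacher, d))
a_t = rng.standard_normal(m_teacher)
y = np.maximum(X @ W_t.T, 0.0) @ a_t

# Over-parameterized student, trained by gradient descent on squared loss.
W_s = 0.1 * rng.standard_normal((m_student, d))
a_s = 0.1 * rng.standard_normal(m_student)
lr = 0.01
for _ in range(3000):
    H = np.maximum(X @ W_s.T, 0.0)           # hidden activations, (n, m_student)
    err = H @ a_s - y                        # residuals, (n,)
    gate = (H > 0).astype(X.dtype)           # ReLU derivative
    grad_a = (H.T @ err) / n
    grad_W = ((err[:, None] * gate) * a_s).T @ X / n
    a_s -= lr * grad_a
    W_s -= lr * grad_W

# Compare student/teacher weights inside the data subspace (projection P)
# and on its orthogonal complement (I - P).
P = B @ B.T
def cosines(A, C):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    return np.abs(A @ C.T)                   # |cosine| between all node pairs

in_sub = cosines(W_s @ P, W_t @ P)
off_sub = cosines(W_s @ (np.eye(d) - P), W_t @ (np.eye(d) - P))
print("best in-subspace match per student node :", in_sub.max(axis=1).round(2))
print("best off-subspace match per student node:", off_sub.max(axis=1).round(2))
```

Note the mechanism this toy setup exposes: each gradient step on W_s is a linear combination of training inputs, which all lie in the column space of B, so the off-subspace component of every student weight stays frozen at its random initialization. Any in-subspace alignment with teacher neurons therefore says nothing about behavior off the subspace, consistent with the abstract's conjectured source of adversarial examples. How strong the in-subspace alignment gets depends on width, learning rate, and training time.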

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-yang21e,
  title     = {Understanding Robustness in Teacher-Student Setting: A New Perspective},
  author    = {Yang, Zhuolin and Chen, Zhaoxi and Cai, Tiffany and Chen, Xinyun and Li, Bo and Tian, Yuandong},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3313--3321},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/yang21e/yang21e.pdf},
  url       = {https://proceedings.mlr.press/v130/yang21e.html}
}
APA
Yang, Z., Chen, Z., Cai, T., Chen, X., Li, B. & Tian, Y. (2021). Understanding Robustness in Teacher-Student Setting: A New Perspective. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3313-3321. Available from https://proceedings.mlr.press/v130/yang21e.html.
