Investigating Generalization by Controlling Normalized Margin

Alexander R Farhang, Jeremy D Bernstein, Kushal Tirumala, Yang Liu, Yisong Yue
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:6324-6336, 2022.

Abstract

Weight norm $\|w\|$ and margin $\gamma$ participate in learning theory via the normalized margin $\gamma/\|w\|$. Since standard neural net optimizers do not control normalized margin, it is hard to test whether this quantity causally relates to generalization. This paper designs a series of experimental studies that explicitly control normalized margin and thereby tackle two central questions. First: does normalized margin always have a causal effect on generalization? The paper finds that no—networks can be produced where normalized margin has seemingly no relationship with generalization, counter to the theory of Bartlett et al. (2017). Second: does normalized margin ever have a causal effect on generalization? The paper finds that yes—in a standard training setup, test performance closely tracks normalized margin. The paper suggests a Gaussian process model as a promising explanation for this behavior.
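As a concrete illustration of the quantity under study, the sketch below computes a normalized margin for a multi-class linear classifier: each example's margin is the gap between the true-class score and the best competing score, and the dataset margin $\gamma$ (the minimum such gap) is divided by the weight norm $\|w\|$. This is a minimal sketch, not the paper's code; the linear model, the Frobenius norm, and the function `normalized_margin` are illustrative assumptions, and the paper's experiments control this quantity for deep networks during training rather than merely measuring it afterward.

```python
import numpy as np

def normalized_margin(W, X, y):
    """Normalized margin gamma / ||W|| of a linear classifier (illustrative sketch).

    For class scores f(x) = W @ x, the margin of example (x, y) is
        f(x)[y] - max_{j != y} f(x)[j],
    and gamma is the minimum margin over the dataset.
    """
    logits = X @ W.T                          # (n, k) class scores
    idx = np.arange(len(y))
    true_score = logits[idx, y]               # score of the correct class
    logits[idx, y] = -np.inf                  # mask out the true class
    runner_up = logits.max(axis=1)            # best competing class score
    gamma = (true_score - runner_up).min()    # dataset margin (may be negative)
    return gamma / np.linalg.norm(W)          # normalize by the weight norm

# Toy usage: 3 classes, 5 features, 100 random points
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
X = rng.normal(size=(100, 5))
y = rng.integers(0, 3, size=100)
print(normalized_margin(W, X, y))
```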

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-farhang22a,
  title     = {Investigating Generalization by Controlling Normalized Margin},
  author    = {Farhang, Alexander R and Bernstein, Jeremy D and Tirumala, Kushal and Liu, Yang and Yue, Yisong},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {6324--6336},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/farhang22a/farhang22a.pdf},
  url       = {https://proceedings.mlr.press/v162/farhang22a.html}
}
Endnote
%0 Conference Paper
%T Investigating Generalization by Controlling Normalized Margin
%A Alexander R Farhang
%A Jeremy D Bernstein
%A Kushal Tirumala
%A Yang Liu
%A Yisong Yue
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-farhang22a
%I PMLR
%P 6324--6336
%U https://proceedings.mlr.press/v162/farhang22a.html
%V 162
APA
Farhang, A. R., Bernstein, J. D., Tirumala, K., Liu, Y., & Yue, Y. (2022). Investigating Generalization by Controlling Normalized Margin. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:6324-6336. Available from https://proceedings.mlr.press/v162/farhang22a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v162/farhang22a/farhang22a.pdf