A Kernel Perspective for Regularizing Deep Neural Networks

Alberto Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:664-674, 2019.

Abstract

We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm cannot be computed, it admits upper and lower approximations that lead to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm penalties, gradient penalties, and adversarial training, (ii) leads to new, effective regularization penalties, and (iii) suggests hybrid strategies that combine lower and upper bounds to better approximate the RKHS norm. We show experimentally that this approach is effective both when learning on small datasets and for obtaining adversarially robust models.
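To make the two families of surrogates mentioned above concrete, here is a minimal PyTorch sketch (not taken from the paper's code) of a lower-bound penalty on the input gradient and an upper-bound penalty on layer spectral norms. The names gradient_penalty, spectral_penalty, lambda_grad, and lambda_spec are illustrative placeholders, and flattening a convolution kernel into a matrix only crudely bounds the spectral norm of the full convolution operator.

```python
import torch
import torch.nn as nn


def gradient_penalty(model: nn.Module, x: torch.Tensor,
                     lambda_grad: float = 1.0) -> torch.Tensor:
    """Lower-bound surrogate (illustrative): penalize the squared l2 norm
    of the gradient of the model output with respect to the input,
    evaluated at the data points."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    # Summing the outputs yields per-sample input gradients in one backward pass;
    # create_graph=True keeps the penalty differentiable w.r.t. the weights.
    grads, = torch.autograd.grad(out.sum(), x, create_graph=True)
    grad_norms = grads.flatten(1).norm(dim=1)
    return lambda_grad * (grad_norms ** 2).mean()


def spectral_penalty(model: nn.Module,
                     lambda_spec: float = 1.0) -> torch.Tensor:
    """Upper-bound surrogate (illustrative): penalize the sum of squared
    spectral norms (largest singular values) of the weight matrices.
    Treating a conv kernel as a flat matrix is only a rough proxy for
    the spectral norm of the convolution operator itself."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            w = m.weight.flatten(1)
            penalty = penalty + torch.linalg.matrix_norm(w, ord=2) ** 2
    return lambda_spec * penalty
```

In a training loop such terms would simply be added to the task loss, e.g. loss = criterion(model(x), y) + gradient_penalty(model, x, 0.1) + spectral_penalty(model, 0.01); the hybrid strategies in (iii) correspond to combining surrogates of both kinds.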

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-bietti19a,
  title     = {A Kernel Perspective for Regularizing Deep Neural Networks},
  author    = {Bietti, Alberto and Mialon, Gr{\'e}goire and Chen, Dexiong and Mairal, Julien},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {664--674},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/bietti19a/bietti19a.pdf},
  url       = {https://proceedings.mlr.press/v97/bietti19a.html},
  abstract  = {We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm cannot be computed, it admits upper and lower approximations leading to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm and gradient penalties, or adversarial training, (ii) leads to new effective regularization penalties, and (iii) suggests hybrid strategies combining lower and upper bounds to get better approximations of the RKHS norm. We experimentally show this approach to be effective when learning on small datasets, or to obtain adversarially robust models.}
}
Endnote
%0 Conference Paper
%T A Kernel Perspective for Regularizing Deep Neural Networks
%A Alberto Bietti
%A Grégoire Mialon
%A Dexiong Chen
%A Julien Mairal
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-bietti19a
%I PMLR
%P 664--674
%U https://proceedings.mlr.press/v97/bietti19a.html
%V 97
%X We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm cannot be computed, it admits upper and lower approximations leading to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm and gradient penalties, or adversarial training, (ii) leads to new effective regularization penalties, and (iii) suggests hybrid strategies combining lower and upper bounds to get better approximations of the RKHS norm. We experimentally show this approach to be effective when learning on small datasets, or to obtain adversarially robust models.
APA
Bietti, A., Mialon, G., Chen, D. & Mairal, J. (2019). A Kernel Perspective for Regularizing Deep Neural Networks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:664-674. Available from https://proceedings.mlr.press/v97/bietti19a.html.
