Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning

Yang Zhao, Hao Zhang, Xiuyuan Hu
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26982-26992, 2022.

Abstract

How to train deep neural networks (DNNs) that generalize well is a central concern in deep learning, especially for the severely overparameterized networks common today. In this paper, we propose an effective method to improve model generalization by additionally penalizing the gradient norm of the loss function during optimization. We demonstrate that confining the gradient norm of the loss function helps lead optimizers toward flat minima. We leverage a first-order approximation to efficiently compute the corresponding gradient so that it fits naturally into the gradient descent framework. Our experiments confirm that the method improves the generalization performance of various models on different datasets. We also show that the recent sharpness-aware minimization (SAM) method (Foret et al., 2021) is a special, but not the best, case of our method, and that the best case yields new state-of-the-art performance on these tasks. Code is available at https://github.com/zhaoyang-0204/gnp.
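
For readers who want a concrete picture of the training step the abstract describes, below is a minimal PyTorch-style sketch of one way to realize a gradient-norm penalty with a first-order approximation. It is written from the abstract alone and is not the authors' reference implementation (the official code lives in the linked repository); the function name gnp_step, the balance coefficient alpha, and the perturbation radius r are illustrative assumptions.

import torch

def gnp_step(model, loss_fn, inputs, targets, optimizer, alpha=0.8, r=0.05):
    # Sketch of optimizing L(theta) + lambda * ||grad L(theta)||_2 with
    # lambda = alpha * r, using the two-pass first-order approximation
    # (1 - alpha) * g + alpha * g_perturbed in place of the exact
    # Hessian-vector product; alpha = 1 recovers a SAM-style update.
    # (Hyperparameter names and values here are illustrative, not the paper's.)

    # Pass 1: plain gradient g = grad L(theta).
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)).item() + 1e-12

    # Perturb parameters to theta + r * g / ||g||.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(g, alpha=r / grad_norm)

    # Pass 2: gradient at the perturbed point.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation and blend the two gradients in place.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(g, alpha=r / grad_norm)
            p.grad.mul_(alpha).add_(g, alpha=1.0 - alpha)

    optimizer.step()
    return loss.item()

In a training loop this would replace the usual loss.backward(); optimizer.step() pair, at roughly twice the per-step cost because of the two forward-backward passes.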

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhao22i,
  title     = {Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning},
  author    = {Zhao, Yang and Zhang, Hao and Hu, Xiuyuan},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26982--26992},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhao22i/zhao22i.pdf},
  url       = {https://proceedings.mlr.press/v162/zhao22i.html},
  abstract  = {How to train deep neural networks (DNNs) to generalize well is a central concern in deep learning, especially for severely overparameterized networks nowadays. In this paper, we propose an effective method to improve the model generalization by additionally penalizing the gradient norm of loss function during optimization. We demonstrate that confining the gradient norm of loss function could help lead the optimizers towards finding flat minima. We leverage the first-order approximation to efficiently implement the corresponding gradient to fit well in the gradient descent framework. In our experiments, we confirm that when using our methods, generalization performance of various models could be improved on different datasets. Also, we show that the recent sharpness-aware minimization method (Foret et al., 2021) is a special, but not the best, case of our method, where the best case of our method could give new state-of-the-art performance on these tasks. Code is available at https://github.com/zhaoyang-0204/gnp.}
}
Endnote
%0 Conference Paper
%T Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning
%A Yang Zhao
%A Hao Zhang
%A Xiuyuan Hu
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhao22i
%I PMLR
%P 26982--26992
%U https://proceedings.mlr.press/v162/zhao22i.html
%V 162
%X How to train deep neural networks (DNNs) to generalize well is a central concern in deep learning, especially for severely overparameterized networks nowadays. In this paper, we propose an effective method to improve the model generalization by additionally penalizing the gradient norm of loss function during optimization. We demonstrate that confining the gradient norm of loss function could help lead the optimizers towards finding flat minima. We leverage the first-order approximation to efficiently implement the corresponding gradient to fit well in the gradient descent framework. In our experiments, we confirm that when using our methods, generalization performance of various models could be improved on different datasets. Also, we show that the recent sharpness-aware minimization method (Foret et al., 2021) is a special, but not the best, case of our method, where the best case of our method could give new state-of-the-art performance on these tasks. Code is available at https://github.com/zhaoyang-0204/gnp.
APA
Zhao, Y., Zhang, H. & Hu, X. (2022). Penalizing Gradient Norm for Efficiently Improving Generalization in Deep Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26982-26992. Available from https://proceedings.mlr.press/v162/zhao22i.html.