Benign Overfitting in Two-layer ReLU Convolutional Neural Networks

Yiwen Kou, Zixiang Chen, Yuanzhou Chen, Quanquan Gu
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:17615-17659, 2023.

Abstract

Modern deep learning models with great expressive power can be trained to overfit the training data but still generalize well. This phenomenon is referred to as benign overfitting. Recently, a few studies have attempted to theoretically understand benign overfitting in neural networks. However, these works are limited either to neural networks with smooth activation functions or to the neural tangent kernel regime. How and when benign overfitting can occur in ReLU neural networks remains an open problem. In this work, we seek to answer this question by establishing algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise. We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes-optimal test risk. Our result also reveals a sharp transition, in terms of test risk, between benign and harmful overfitting under different conditions on the data distribution. Experiments on synthetic data support our theory.
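For a concrete picture of the setting the abstract describes, the sketch below is a minimal illustration and not the authors' code: it uses one common formalization of such settings, namely two-patch synthetic data (a signal patch carrying the clean label times a fixed vector mu, plus a Gaussian noise patch), labels flipped at rate p, and a two-layer ReLU CNN trained by plain gradient descent on the logistic loss. All hyperparameters (dimension d, sample size n, filter count m, signal strength, learning rate, step count) are illustrative assumptions rather than values from the paper.

# Minimal sketch (assumptions as noted above); requires only NumPy.
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 100, 50, 10           # patch dimension, sample size, filters per output
mu = np.zeros(d); mu[0] = 5.0   # fixed signal vector (illustrative strength)
p = 0.1                         # label-flipping probability

# Each example has a signal patch y_true * mu and a noise patch xi ~ N(0, I).
y_true = rng.choice([-1, 1], size=n)
y = y_true * np.where(rng.random(n) < p, -1, 1)        # observed (possibly flipped) labels
X = np.stack([np.outer(y_true, mu),                    # signal patches, shape (n, d)
              rng.standard_normal((n, d))], axis=1)    # noise patches -> X is (n, 2, d)

# Filters for the positive and negative outputs, small random init.
W = 0.01 * rng.standard_normal((2, m, d))

def f(W, X):
    # f(W, x) = (1/m) * sum_j sum_patches [ReLU(<w_{+,j}, x_p>) - ReLU(<w_{-,j}, x_p>)]
    acts = np.maximum(np.einsum('cmd,npd->cmnp', W, X), 0.0)
    return (acts[0] - acts[1]).sum(axis=(0, 2)) / m

lr = 0.1
for step in range(2000):
    margins = y * f(W, X)
    # Derivative of the logistic loss log(1 + exp(-z)) at z = margin.
    g = -1.0 / (1.0 + np.exp(np.clip(margins, -30.0, 30.0)))
    mask = (np.einsum('cmd,npd->cmnp', W, X) > 0).astype(float)   # ReLU subgradient
    sign = np.array([1.0, -1.0]).reshape(2, 1, 1, 1)              # +/- output branch
    grad = np.einsum('cmnp,n,npd->cmd', mask * sign, g * y, X) / (m * n)
    W -= lr * grad

# With enough steps the network can fit even the flipped labels (near-zero
# training error), the overfitting regime the paper analyzes.
print("train error:", np.mean(np.sign(f(W, X)) != y))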

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kou23a,
  title     = {Benign Overfitting in Two-layer {R}e{LU} Convolutional Neural Networks},
  author    = {Kou, Yiwen and Chen, Zixiang and Chen, Yuanzhou and Gu, Quanquan},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {17615--17659},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kou23a/kou23a.pdf},
  url       = {https://proceedings.mlr.press/v202/kou23a.html}
}
APA
Kou, Y., Chen, Z., Chen, Y., & Gu, Q. (2023). Benign Overfitting in Two-layer ReLU Convolutional Neural Networks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:17615-17659. Available from https://proceedings.mlr.press/v202/kou23a.html.
