Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks

Mert Pilanci, Tolga Ergen
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7695-7705, 2020.

Abstract

We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs) as a single convex program whose number of variables is polynomial in the number of training samples and the number of hidden neurons. Our theory utilizes semi-infinite duality and minimum-norm regularization. We show that ReLU networks trained with standard weight decay are equivalent to block $\ell_1$-penalized convex models. Moreover, we show that certain standard convolutional linear networks are equivalent to semi-definite programs, which can be simplified to $\ell_1$-regularized linear models in a polynomial-sized discrete Fourier feature space.
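
As a rough illustration of the block $\ell_1$ formulation, the sketch below sets up the convex program for a two-layer ReLU network with squared loss using cvxpy. All problem sizes, the pattern-sampling shortcut, and the names (`V`, `W`, `beta`) are assumptions introduced for illustration only; the paper's exact equivalence enumerates every hyperplane arrangement pattern of the data, whereas sampling covers only a subset.

```python
import numpy as np
import cvxpy as cp

# Toy data; n samples, d features (sizes are assumptions for illustration).
rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta = 1e-3  # regularization strength, playing the role of weight decay

# Sample hyperplane-arrangement sign patterns D_i = diag(1[X u >= 0]).
# The paper enumerates all such patterns; sampling is a heuristic shortcut.
U = rng.standard_normal((d, 50))
D = np.unique((X @ U >= 0).astype(float), axis=1)
p = D.shape[1]

V = cp.Variable((d, p))  # weights for the "positive" neurons
W = cp.Variable((d, p))  # weights for the "negative" neurons

pred = 0
constraints = []
for i in range(p):
    di = D[:, i]                      # 0/1 activation mask for pattern i
    pred = pred + cp.multiply(di, X @ (V[:, i] - W[:, i]))
    sign = 2.0 * di - 1.0             # keeps each weight consistent with its pattern
    constraints += [cp.multiply(sign, X @ V[:, i]) >= 0,
                    cp.multiply(sign, X @ W[:, i]) >= 0]

# Squared loss plus a block (group) l1 penalty over neurons.
group_l1 = cp.sum(cp.norm(V, 2, axis=0)) + cp.sum(cp.norm(W, 2, axis=0))
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(pred - y) + beta * group_l1),
                     constraints)
problem.solve()
print("optimal objective:", problem.value)
```

The group penalty $\sum_i (\|v_i\|_2 + \|w_i\|_2)$ is what makes the program a block $\ell_1$ regularized model: it zeroes out entire candidate neurons, mirroring how weight decay on both layers induces sparsity over hidden units.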

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-pilanci20a,
  title     = {Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks},
  author    = {Pilanci, Mert and Ergen, Tolga},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7695--7705},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/pilanci20a/pilanci20a.pdf},
  url       = {https://proceedings.mlr.press/v119/pilanci20a.html},
  abstract  = {We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs) in terms of a single convex program with number of variables polynomial in the number of training samples and the number of hidden neurons. Our theory utilizes semi-infinite duality and minimum norm regularization. We show that ReLU networks trained with standard weight decay are equivalent to block $\ell_1$ penalized convex models. Moreover, we show that certain standard convolutional linear networks are equivalent semi-definite programs which can be simplified to $\ell_1$ regularized linear models in a polynomial sized discrete Fourier feature space}
}
Endnote
%0 Conference Paper
%T Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks
%A Mert Pilanci
%A Tolga Ergen
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-pilanci20a
%I PMLR
%P 7695--7705
%U https://proceedings.mlr.press/v119/pilanci20a.html
%V 119
%X We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs) in terms of a single convex program with number of variables polynomial in the number of training samples and the number of hidden neurons. Our theory utilizes semi-infinite duality and minimum norm regularization. We show that ReLU networks trained with standard weight decay are equivalent to block $\ell_1$ penalized convex models. Moreover, we show that certain standard convolutional linear networks are equivalent semi-definite programs which can be simplified to $\ell_1$ regularized linear models in a polynomial sized discrete Fourier feature space
APA
Pilanci, M., & Ergen, T. (2020). Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7695-7705. Available from https://proceedings.mlr.press/v119/pilanci20a.html.