Optimal Sets and Solution Paths of ReLU Networks

Aaron Mishkin, Mert Pilanci
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:24888-24924, 2023.

Abstract

We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mishkin23a,
  title     = {Optimal Sets and Solution Paths of {R}e{LU} Networks},
  author    = {Mishkin, Aaron and Pilanci, Mert},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {24888--24924},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mishkin23a/mishkin23a.pdf},
  url       = {https://proceedings.mlr.press/v202/mishkin23a.html},
  abstract  = {We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.}
}
Endnote
%0 Conference Paper
%T Optimal Sets and Solution Paths of ReLU Networks
%A Aaron Mishkin
%A Mert Pilanci
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mishkin23a
%I PMLR
%P 24888--24924
%U https://proceedings.mlr.press/v202/mishkin23a.html
%V 202
%X We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.
APA
Mishkin, A. & Pilanci, M. (2023). Optimal Sets and Solution Paths of ReLU Networks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:24888-24924. Available from https://proceedings.mlr.press/v202/mishkin23a.html.