Learning Compact Neural Networks with Regularization

Samet Oymak
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3966-3975, 2018.

Abstract

Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost-efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight-sharing (convolutional networks), sparsity (network pruning), and low-rank constraints, among others. We first introduce the covering dimension to quantify the complexity of the constraint set and provide insights into the generalization properties. Then, we show that the proposed algorithms become well behaved and exhibit local linear convergence once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning, and they illustrate how regularization can be beneficial for learning over-parameterized networks.
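As a rough illustration of the kind of algorithm the abstract describes, the sketch below runs projected gradient descent on a one-hidden-layer ReLU network whose hidden weights are constrained to be sparse, starting from an initialization near a planted network to mirror the local-convergence setting. This is not the paper's exact algorithm: the model, the hard-thresholding projection, the step size, and the synthetic data are all illustrative assumptions.

import numpy as np

def hard_threshold(W, s):
    # Project W onto (approximately) s-sparse matrices by keeping the
    # s largest-magnitude entries and zeroing out the rest.
    flat = np.abs(W).flatten()
    if s >= flat.size:
        return W
    cutoff = np.partition(flat, -s)[-s]
    return np.where(np.abs(W) >= cutoff, W, 0.0)

def relu(z):
    return np.maximum(z, 0.0)

def projected_gradient_descent(X, y, v, s, W0, step=0.1, iters=500):
    # Minimize the mean squared loss of y ~ relu(X W^T) v over s-sparse W
    # by alternating a gradient step with a hard-thresholding projection.
    n = X.shape[0]
    W = hard_threshold(W0, s)
    for _ in range(iters):
        Z = X @ W.T                      # pre-activations, shape (n, k)
        residual = relu(Z) @ v - y       # prediction errors, shape (n,)
        grad = ((residual[:, None] * (Z > 0)) * v[None, :]).T @ X / n
        W = hard_threshold(W - step * grad, s)
    return W

# Synthetic check: plant an s-sparse W, start from a nearby initialization,
# and measure how well the planted weights are recovered.
rng = np.random.default_rng(0)
n, d, k, s = 2000, 50, 5, 25
W_true = hard_threshold(rng.normal(size=(k, d)), s)
v = rng.choice([-1.0, 1.0], size=k)
X = rng.normal(size=(n, d))
y = relu(X @ W_true.T) @ v
W0 = W_true + 0.1 * rng.normal(size=(k, d))
W_hat = projected_gradient_descent(X, y, v, s, W0)
print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))

Hard thresholding stands in for the general projection onto the constraint set; swapping it for a weight-sharing or low-rank projection would correspond to the other constraint classes mentioned in the abstract.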

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-oymak18a,
  title     = {Learning Compact Neural Networks with Regularization},
  author    = {Oymak, Samet},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3966--3975},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/oymak18a/oymak18a.pdf},
  url       = {https://proceedings.mlr.press/v80/oymak18a.html},
  abstract  = {Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight-sharing (convolutional networks), sparsity (network pruning), and low-rank constraints among others. We first introduce covering dimension to quantify the complexity of the constraint set and provide insights on the generalization properties. Then, we show that proposed algorithms become well-behaved and local linear convergence occurs once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning and illustrate how regularization can be beneficial to learn over-parameterized networks.}
}
Endnote
%0 Conference Paper
%T Learning Compact Neural Networks with Regularization
%A Samet Oymak
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-oymak18a
%I PMLR
%P 3966--3975
%U https://proceedings.mlr.press/v80/oymak18a.html
%V 80
%X Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight-sharing (convolutional networks), sparsity (network pruning), and low-rank constraints among others. We first introduce covering dimension to quantify the complexity of the constraint set and provide insights on the generalization properties. Then, we show that proposed algorithms become well-behaved and local linear convergence occurs once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning and illustrate how regularization can be beneficial to learn over-parameterized networks.
APA
Oymak, S. (2018). Learning Compact Neural Networks with Regularization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3966-3975. Available from https://proceedings.mlr.press/v80/oymak18a.html.
