Nearly-tight VC-dimension bounds for piecewise linear neural networks

Nick Harvey, Christopher Liaw, Abbas Mehrabian
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1064-1068, 2017.

Abstract

We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega(W L \log(W/L))$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these results generalize to arbitrary piecewise linear activation functions.
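As a compact restatement of the abstract's bounds (the constants $c, C > 0$ and the maximization over architectures are notation introduced here for illustration, not the paper's): for ReLU networks with $W$ weights and $L$ layers,
\[
c \cdot W L \log(W/L) \;\le\; \max_{\text{architectures with } W \text{ weights, } L \text{ layers}} \mathrm{VCdim} \;\le\; C \cdot W L \log W,
\]
where the lower bound is witnessed by explicit example networks and the upper bound holds for every such network; in terms of the number $U$ of non-linear units, the corresponding maximum VC-dimension is $\Theta(W U)$.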

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-harvey17a,
  title     = {Nearly-tight {VC}-dimension bounds for piecewise linear neural networks},
  author    = {Harvey, Nick and Liaw, Christopher and Mehrabian, Abbas},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {1064--1068},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/harvey17a/harvey17a.pdf},
  url       = {https://proceedings.mlr.press/v65/harvey17a.html},
  abstract  = {We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega(W L \log(W/L))$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these results generalize to arbitrary piecewise linear activation functions.}
}
Endnote
%0 Conference Paper
%T Nearly-tight VC-dimension bounds for piecewise linear neural networks
%A Nick Harvey
%A Christopher Liaw
%A Abbas Mehrabian
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-harvey17a
%I PMLR
%P 1064--1068
%U https://proceedings.mlr.press/v65/harvey17a.html
%V 65
%X We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega(W L \log(W/L))$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these results generalize to arbitrary piecewise linear activation functions.
APA
Harvey, N., Liaw, C. & Mehrabian, A. (2017). Nearly-tight VC-dimension bounds for piecewise linear neural networks. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:1064-1068. Available from https://proceedings.mlr.press/v65/harvey17a.html.