Depth Separation for Neural Networks

Amit Daniely
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:690-696, 2017.

Abstract

Let $f:\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}\to\mathbb{R}$ be a function of the form $f(x,x') = g(\langle x,x'\rangle)$ for $g:[-1,1]\to \mathbb{R}$. We give a simple proof that poly-size depth-two neural networks with (exponentially) bounded weights cannot approximate $f$ whenever $g$ cannot be approximated by a low-degree polynomial. Moreover, for many $g$'s, such as $g(x)=\sin(\pi d^3x)$, the number of neurons must be $2^{\Omega(d\log(d))}$. Furthermore, the result holds w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$. As many functions of the above form can be well approximated by poly-size depth-three networks with poly-bounded weights, this establishes a separation between depth-two and depth-three networks w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$.
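
As a concrete illustration (not from the paper), the following Python sketch constructs the hard target $f(x,x') = \sin(\pi d^3\langle x,x'\rangle)$ and evaluates it on pairs drawn uniformly from $\mathbb{S}^{d-1}$. The dimension d = 20, the sampler sample_sphere, and the function names are illustrative choices, not anything defined in the paper.

import numpy as np

d = 20  # input dimension: x, x' lie on the unit sphere S^{d-1} in R^d
rng = np.random.default_rng(0)

def sample_sphere(n, d, rng):
    # Uniform points on S^{d-1}: normalize i.i.d. Gaussian vectors.
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def g(t, d):
    # The rapidly oscillating profile g(t) = sin(pi * d^3 * t) from the abstract.
    return np.sin(np.pi * d**3 * t)

x  = sample_sphere(5, d, rng)
xp = sample_sphere(5, d, rng)
inner = np.sum(x * xp, axis=1)  # inner products <x, x'>, each in [-1, 1]
print(g(inner, d))              # values of the hard target f(x, x') = g(<x, x'>)

For independent uniform $x, x'$ the inner product concentrates around $0$ at scale $1/\sqrt{d}$, yet $g$ has period $2/d^3$ in $t$, so it still completes on the order of $d^{5/2}$ oscillations over the typical range; informally, this rapid oscillation is what no low-degree polynomial in $\langle x,x'\rangle$ can track.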

Cite this Paper

BibTeX
@InProceedings{pmlr-v65-daniely17a,
  title     = {Depth Separation for Neural Networks},
  author    = {Daniely, Amit},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {690--696},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/daniely17a/daniely17a.pdf},
  url       = {https://proceedings.mlr.press/v65/daniely17a.html}
}
Endnote
%0 Conference Paper
%T Depth Separation for Neural Networks
%A Amit Daniely
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-daniely17a
%I PMLR
%P 690--696
%U https://proceedings.mlr.press/v65/daniely17a.html
%V 65
APA
Daniely, A. (2017). Depth Separation for Neural Networks. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:690-696. Available from https://proceedings.mlr.press/v65/daniely17a.html.
