Depth Separation for Neural Networks
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:690-696, 2017.
Abstract
Let $f:\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}\to \mathbb{R}$ be a function of the form $f(\mathbf{x},\mathbf{x}')=g(\langle \mathbf{x},\mathbf{x}'\rangle)$ for $g:[-1,1]\to \mathbb{R}$. We give a simple proof that poly-size depth-two neural networks with (exponentially) bounded weights cannot approximate $f$ whenever $g$ cannot be approximated by a low-degree polynomial. Moreover, for many choices of $g$, such as $g(x)=\sin(\pi d^3 x)$, the number of neurons must be $2^{\Omega(d\log(d))}$. Furthermore, the result holds w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$. As many functions of the above form can be well approximated by poly-size depth-three networks with polynomially bounded weights, this establishes a separation between depth-two and depth-three networks w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$.
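To make the setting concrete, the following sketch (an illustration, not code from the paper; the function names `sample_sphere` and `hard_target` are ours) samples pairs uniformly from $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$ and evaluates the hard target $f(\mathbf{x},\mathbf{x}')=\sin(\pi d^3 \langle \mathbf{x},\mathbf{x}'\rangle)$, i.e., the inputs and distribution against which the lower bound for depth-two networks is stated.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, d):
    """Sample n points uniformly from S^{d-1} by normalizing
    standard Gaussian vectors (rotation invariance gives uniformity)."""
    z = rng.standard_normal((n, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def hard_target(x, xp, d):
    """The inner-product target f(x, x') = g(<x, x'>) with
    g(t) = sin(pi * d^3 * t), one of the g's for which the paper
    shows poly-size depth-two networks fail."""
    t = np.sum(x * xp, axis=1)  # <x, x'> lies in [-1, 1]
    return np.sin(np.pi * d**3 * t)

d, n = 10, 5
x, xp = sample_sphere(n, d), sample_sphere(n, d)
print(hard_target(x, xp, d))
```

Intuitively, $g(t)=\sin(\pi d^3 t)$ oscillates on the order of $d^3$ times over $[-1,1]$, so it has no good low-degree polynomial approximation, which is exactly the condition under which the lower bound applies.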