Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators
Proceedings of the Thirty-Fifth Conference on Learning Theory, PMLR 178:421-447, 2022.
Abstract
We construct pairs of distributions $\mu_d, \nu_d$ on $\mathbb{R}^d$ such that the quantity $|\mathbb{E}_{x\sim\mu_d}[F(x)] - \mathbb{E}_{x\sim\nu_d}[F(x)]|$ decreases as $\Omega(1/d^2)$ for some three-layer ReLU network $F$ with polynomial width and weights, while declining exponentially in $d$ if $F$ is any two-layer network with polynomial weights. This shows that deep GAN discriminators are able to distinguish distributions that shallow discriminators cannot. Analogously, we build pairs of distributions $\mu_d, \nu_d$ on $\mathbb{R}^d$ such that $|\mathbb{E}_{x\sim\mu_d}[F(x)] - \mathbb{E}_{x\sim\nu_d}[F(x)]|$ decreases as $\Omega(1/(d \log d))$ for two-layer ReLU networks with polynomial weights, while declining exponentially for bounded-norm functions in the associated RKHS. This confirms that feature learning is beneficial for discriminators. Our bounds are based on Fourier transforms.
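To make the central quantity concrete, here is a minimal Monte Carlo sketch of the discrepancy $|\mathbb{E}_{x\sim\mu}[F(x)] - \mathbb{E}_{x\sim\nu}[F(x)]|$ that a discriminator $F$ assigns to a pair of distributions. The Gaussian pair and the random two-layer ReLU network below are illustrative placeholders chosen for this sketch, not the paper's constructions (which analyze the best such $F$ and specially designed $\mu_d, \nu_d$).

```python
# Sketch: estimate |E_{x~mu}[F(x)] - E_{x~nu}[F(x)]| by Monte Carlo.
# mu, nu, and F here are stand-ins, NOT the distributions or networks
# constructed in the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 50          # ambient dimension
width = 200     # hidden width of the discriminator
n = 100_000     # Monte Carlo samples per distribution

# Hypothetical pair of distributions on R^d:
# mu = N(0, I_d), nu = N(0.1 * e_1, I_d).
mu_samples = rng.standard_normal((n, d))
nu_samples = rng.standard_normal((n, d))
nu_samples[:, 0] += 0.1

# A random two-layer ReLU network F(x) = a^T relu(W x + b) with
# bounded weights (random here, rather than the optimal discriminator).
W = rng.standard_normal((width, d)) / np.sqrt(d)
b = rng.standard_normal(width)
a = rng.standard_normal(width) / width

def F(x):
    """Apply the two-layer ReLU discriminator row-wise to samples x."""
    return np.maximum(x @ W.T + b, 0.0) @ a

# Monte Carlo estimate of the discrepancy |E_mu[F] - E_nu[F]|.
gap = abs(F(mu_samples).mean() - F(nu_samples).mean())
print(f"estimated discrepancy: {gap:.4e}")
```

Taking the supremum of this gap over a function class (three-layer ReLU networks, two-layer networks, or a ball in an RKHS) gives the integral probability metric whose scaling in $d$ the paper's separation results compare.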