Optimal approximation of continuous functions by very deep ReLU networks
Proceedings of the 31st Conference On Learning Theory, PMLR 75:639-649, 2018.
Abstract
We consider approximations of general continuous functions on finite-dimensional cubes by general deep ReLU neural networks and study the approximation rates with respect to the modulus of continuity of the function and the total number of weights $W$ in the network. We establish the complete phase diagram of feasible approximation rates and show that it includes two distinct phases. One phase corresponds to slower approximations that can be achieved with constant-depth networks and continuous weight assignments. The other phase provides faster approximations at the cost of depths necessarily growing as a power law $L\sim W^{\alpha}, 0<\alpha\le 1,$ and with necessarily discontinuous weight assignments. In particular, we prove that constant-width fully-connected networks of depth $L\sim W$ provide the fastest possible approximation rate $\|f-\widetilde f\|_\infty = O(\omega_f(O(W^{-2/\nu})))$ that cannot be achieved with less deep networks.
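As a rough illustration of the $L\sim W$ regime discussed above (not the paper's construction): in a constant-width fully-connected ReLU network, the total weight count grows linearly with depth, so depth $L$ is proportional to $W$. A minimal sketch in Python with NumPy, where the width, depth, and input dimension are hypothetical choices for illustration:

```python
import numpy as np

def relu_network(widths, rng):
    """Sample random weights for a fully-connected ReLU network
    with the given layer widths (input -> hidden ... -> output)."""
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(widths[:-1], widths[1:])]

def forward(params, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

def weight_count(params):
    """Total number of weights and biases in the network."""
    return sum(W.size + b.size for W, b in params)

rng = np.random.default_rng(0)
width, depth = 8, 50                  # constant width, many layers
widths = [3] + [width] * depth + [1]  # input dim 3, scalar output
params = relu_network(widths, rng)

# Total weights W grows linearly in depth L: W ~ L * (width^2 + width),
# so L ~ W at fixed width -- the "very deep" phase of the diagram.
print(weight_count(params))
```

Doubling `depth` roughly doubles `weight_count(params)`, whereas widening the network at fixed depth grows the count quadratically in `width`; this is why the very deep, constant-width regime realizes $L\sim W$.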