Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1954-1979, 2017.
Abstract
Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, the current theoretical understanding of ERM for a related problem, stochastic convex optimization (SCO), is limited. In this work, we strengthen the theory of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an $\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is nonnegative, convex, and smooth, and the expected function is Lipschitz continuous, where $d$ is the dimensionality of the problem, $n$ is the number of samples, and $F_*$ is the minimal risk. Thus, when $F_*$ is small we obtain an $\widetilde{O}(d/n)$ risk bound, which is analogous to the $\widetilde{O}(1/n)$ optimistic rate of ERM for supervised learning. Second, if the objective function is also $\lambda$-strongly convex, we prove an $\widetilde{O}(d/n + \kappa F_*/n)$ risk bound, where $\kappa$ is the condition number, and improve it to $O(1/[\lambda n^2] + \kappa F_*/n)$ when $n = \widetilde{\Omega}(\kappa d)$. As a result, we obtain an $O(\kappa/n^2)$ risk bound under the condition that $n$ is large and $F_*$ is small, which, to the best of our knowledge, is the first $O(1/n^2)$-type of risk bound for ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function. Finally, we demonstrate that, to achieve an $O(1/[\lambda n^2] + \kappa F_*/n)$ risk bound for supervised learning, the $\widetilde{\Omega}(\kappa d)$ requirement on $n$ can be replaced with $\Omega(\kappa^2)$, which is dimensionality-independent.
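To fix the notation used above, the following is a minimal LaTeX sketch of the SCO setting and the ERM estimator as they are usually formalized. The symbols $f$, $F$, $\mathcal{W}$, $\widehat{F}$, and $\xi_i$ are illustrative choices introduced here and are not taken verbatim from the paper; only $d$, $n$, $F_*$, $\lambda$, and $\kappa$ come from the abstract itself.

% Minimal sketch (under assumed notation) of the SCO problem, the ERM
% estimator, and the excess risk that the abstract's bounds refer to.
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
Stochastic convex optimization (SCO) over a domain $\mathcal{W} \subseteq \mathbb{R}^d$:
\[
  \min_{\mathbf{w} \in \mathcal{W}} \; F(\mathbf{w}) = \mathbb{E}_{\xi}\bigl[ f(\mathbf{w}, \xi) \bigr].
\]
Empirical risk minimization (ERM) over $n$ i.i.d.\ samples $\xi_1, \dots, \xi_n$:
\[
  \widehat{\mathbf{w}} = \operatorname*{arg\,min}_{\mathbf{w} \in \mathcal{W}} \;
  \widehat{F}(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} f(\mathbf{w}, \xi_i).
\]
The bounds in the abstract control the excess risk $F(\widehat{\mathbf{w}}) - F_*$, where
$F_* = \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w})$ is the minimal risk; in the
$\lambda$-strongly convex case, $\kappa$ denotes the condition number, i.e., the ratio
of the smoothness parameter to $\lambda$.
\end{document}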