Fast rates with high probability in exp-concave statistical learning
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1085-1093, 2017.
Abstract
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk $O(d \log(1/δ)/n)$ with probability $1 - δ$. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. One bound obtains a nearly optimal rate without requiring the loss to be Lipschitz continuous, and another requires Lipschitz continuity but obtains the optimal rate.
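As background for the online-to-batch conversion mentioned in the abstract, here is a minimal sketch (not the paper's algorithm; the learner, loss, and data below are illustrative assumptions): an online learner produces a sequence of iterates, and the conversion returns their average as a single batch predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: linear regression with squared loss,
# which is exp-concave on a bounded domain.
d, n = 3, 500
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Online gradient descent as the online learner, producing
# iterates w_1, ..., w_n (one per example).
w = np.zeros(d)
iterates = []
for t in range(n):
    grad = 2 * (X[t] @ w - y[t]) * X[t]   # gradient of squared loss at example t
    w = w - grad / (t + 1)                # decaying step size
    iterates.append(w.copy())

# Online-to-batch conversion: the batch predictor is the
# average of the online learner's iterates.
w_hat = np.mean(np.stack(iterates), axis=0)
print(np.linalg.norm(w_hat - w_true))
```

The point of the construction is that the online learner's average regret controls the excess risk of the averaged predictor; the paper's contribution is to show this bound holds with high probability in the exp-concave setting, not merely in expectation.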