Exploring Local Norms in Exp-concave Statistical Learning

Nikita Puchkin, Nikita Zhivotovskiy
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:1993-2013, 2023.

Abstract

We consider the standard problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide a $O ( d/n + 1/n \log( 1 / \delta ) )$ excess risk bound valid for a wide class of bounded exp-concave losses, where $d$ is the dimension of the convex reference set, $n$ is the sample size, and $\delta$ is the confidence level. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.
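In display form, the excess risk bound stated above can be written as follows. Writing $R(\cdot)$ for the population risk, $\hat{w}$ for the empirical risk minimizer, and $w^*$ for the risk minimizer over the convex class $\mathcal{W}$ (notation chosen here for illustration; the paper's own symbols may differ), the claim is that with probability at least $1-\delta$, for some constant $C$ absorbed by the $O(\cdot)$ notation:

```latex
R(\hat{w}) - R(w^*) \;\le\; C\left( \frac{d}{n} + \frac{\log(1/\delta)}{n} \right),
```

where $d$ is the dimension of the convex reference set and $n$ is the sample size, as defined above.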

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-puchkin23a,
  title     = {Exploring Local Norms in Exp-concave Statistical Learning},
  author    = {Puchkin, Nikita and Zhivotovskiy, Nikita},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {1993--2013},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/puchkin23a/puchkin23a.pdf},
  url       = {https://proceedings.mlr.press/v195/puchkin23a.html},
  abstract  = {We consider the standard problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide a $O ( d/n + 1/n \log( 1 / \delta ) )$ excess risk bound valid for a wide class of bounded exp-concave losses, where $d$ is the dimension of the convex reference set, $n$ is the sample size, and $\delta$ is the confidence level. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.}
}
Endnote
%0 Conference Paper
%T Exploring Local Norms in Exp-concave Statistical Learning
%A Nikita Puchkin
%A Nikita Zhivotovskiy
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-puchkin23a
%I PMLR
%P 1993--2013
%U https://proceedings.mlr.press/v195/puchkin23a.html
%V 195
%X We consider the standard problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide a $O ( d/n + 1/n \log( 1 / \delta ) )$ excess risk bound valid for a wide class of bounded exp-concave losses, where $d$ is the dimension of the convex reference set, $n$ is the sample size, and $\delta$ is the confidence level. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.
APA
Puchkin, N. &amp; Zhivotovskiy, N. (2023). Exploring Local Norms in Exp-concave Statistical Learning. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:1993-2013. Available from https://proceedings.mlr.press/v195/puchkin23a.html.