Efficient improper learning for online logistic regression

Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2085-2108, 2020.

Abstract

We consider the setting of online logistic regression and study the regret with respect to the $\ell_2$-ball of radius $B$. It is known (see Hazan et al. (2014)) that any proper algorithm with logarithmic regret in the number of samples (denoted $n$) necessarily suffers an exponential multiplicative constant in $B$. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving logarithmic regret. Indeed, Foster et al. (2018) showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights, which however has prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, achieves a regret scaling as $O(B\log(Bn))$ with a per-round time complexity of order $O(d^2 + \log(n))$.
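For reference, the regret above is the usual online-learning quantity; a hedged rendering under standard conventions (features $x_t \in \mathbb{R}^d$, labels $y_t \in \{-1,+1\}$, logistic loss) is
$$
\mathrm{Reg}_n \;=\; \sum_{t=1}^{n} \ell\bigl(y_t, \hat y_t\bigr) \;-\; \min_{\|\theta\|_2 \le B} \sum_{t=1}^{n} \ell\bigl(y_t, \langle \theta, x_t\rangle\bigr),
\qquad \ell(y, z) \;=\; \log\bigl(1 + e^{-y z}\bigr),
$$
where an improper learner's prediction $\hat y_t$ need not be of the linear form $\langle \theta, x_t\rangle$ for any $\theta$ in the ball, which is what lets it sidestep the lower bound for proper algorithms.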
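As an informal illustration of the regularized-ERM-with-surrogate-losses template (not the paper's specific algorithm; the quadratic surrogate, the curvature bound, the ridge regularizer, and all names below are assumptions made for this sketch), one round of such an online learner might look as follows in Python:

    import numpy as np

    def logistic_loss(y, z):
        # y in {-1, +1}; z is the predicted margin.
        return np.log1p(np.exp(-y * z))

    class OnlineSurrogateERM:
        """Generic online regularized ERM with a quadratic surrogate of the
        logistic loss (illustrative sketch, not the algorithm of the paper)."""

        def __init__(self, d, lam=1.0):
            self.A = lam * np.eye(d)   # ridge term plus accumulated curvature
            self.b = np.zeros(d)       # accumulated linear terms of the surrogates
            self.w = np.zeros(d)       # current minimizer

        def predict(self, x):
            # Margin prediction; an improper method is free to post-process this.
            return float(self.w @ x)

        def update(self, x, y):
            # Quadratic surrogate of z -> log(1 + exp(-y z)) around z_t = <w_t, x_t>,
            # using the uniform curvature upper bound 1/4 on its second derivative.
            z = self.w @ x
            g = -y / (1.0 + np.exp(y * z))     # derivative of the loss at z
            self.A += 0.25 * np.outer(x, x)
            self.b += (g - 0.25 * z) * x
            # Exact minimizer of the accumulated surrogates plus (lam/2)*||w||^2.
            self.w = -np.linalg.solve(self.A, self.b)

A typical round would call predict(x_t), incur the logistic loss on the revealed label, then call update(x_t, y_t). As written, each update solves a $d \times d$ linear system at $O(d^3)$ cost; maintaining the inverse of A with rank-one (Sherman-Morrison) updates brings this down to $O(d^2)$ per round, matching the order of the per-round cost quoted in the abstract.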

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-jezequel20a,
  title     = {Efficient improper learning for online logistic regression},
  author    = {J{\'e}z{\'e}quel, R{\'e}mi and Gaillard, Pierre and Rudi, Alessandro},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {2085--2108},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/jezequel20a/jezequel20a.pdf},
  url       = {https://proceedings.mlr.press/v125/jezequel20a.html}
}
APA
Jézéquel, R., Gaillard, P. & Rudi, A. (2020). Efficient improper learning for online logistic regression. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:2085-2108. Available from https://proceedings.mlr.press/v125/jezequel20a.html.
