Online learning with kernel losses

Niladri Chatterji, Aldo Pacchiano, Peter Bartlett
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:971-980, 2019.

Abstract

We present a generalization of the adversarial linear bandits framework, where the underlying losses are kernel functions (with an associated reproducing kernel Hilbert space) rather than linear functions. We study a version of the exponential weights algorithm and bound its regret in this setting. Under conditions on the eigen-decay of the kernel, we provide a sharp characterization of the regret of this algorithm. When the eigen-decay is polynomial ($\mu_j \le \mathcal{O}(j^{-\beta})$), we find that the regret is bounded by $\mathcal{R}_n \le \mathcal{O}(n^{\beta/(2(\beta-1))})$, while under the assumption of exponential eigen-decay ($\mu_j \le \mathcal{O}(e^{-\beta j})$) we obtain an even tighter bound, $\mathcal{R}_n \le \tilde{\mathcal{O}}(n^{1/2})$. When the eigen-decay is polynomial we also show a non-matching minimax lower bound on the regret of $\mathcal{R}_n \ge \Omega(n^{(\beta+1)/(2\beta)})$, and a lower bound of $\mathcal{R}_n \ge \Omega(n^{1/2})$ when the decay of the eigenvalues is exponentially fast. We also study the full-information setting when the underlying losses are kernel functions and present an adapted exponential weights algorithm and a conditional gradient descent algorithm.
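A minimal, illustrative sketch of the algorithmic idea (not the paper's exact estimator or exploration scheme): an EXP3-style exponential weights learner playing on a finite discretization of the action set, where each round's loss is a kernel evaluated at an adversary-chosen point. The RBF kernel, the uniform grid on [0, 1], the learning rate, and the importance-weighted loss estimate below are all assumptions made for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_kernel(x, y, bandwidth=0.5):
        # Any positive-definite kernel could play this role; RBF is an assumption here.
        return np.exp(-(x - y) ** 2 / (2 * bandwidth ** 2))

    K = 50                                   # size of the (assumed) discretization grid
    actions = np.linspace(0.0, 1.0, K)       # discretized action set on [0, 1]
    n = 2000                                 # number of rounds
    eta = np.sqrt(np.log(K) / (n * K))       # standard EXP3-style learning rate
    weights = np.ones(K)
    played_losses, adversary_points = [], []

    for t in range(n):
        probs = weights / weights.sum()
        i = rng.choice(K, p=probs)           # sample an action from the exponential weights
        w_t = rng.uniform(0.0, 1.0)          # stand-in for the adversary's choice this round
        loss = rbf_kernel(actions[i], w_t)   # bandit feedback: only the played action's loss
        played_losses.append(loss)
        adversary_points.append(w_t)

        loss_est = np.zeros(K)               # importance-weighted estimate of the loss vector
        loss_est[i] = loss / probs[i]
        weights *= np.exp(-eta * loss_est)   # multiplicative-weights update
        weights /= weights.max()             # rescale for numerical stability

    # Empirical regret against the best fixed action on the grid.
    cumulative = np.array([[rbf_kernel(a, w) for w in adversary_points]
                           for a in actions]).sum(axis=1)
    print(f"regret over {n} rounds: {sum(played_losses) - cumulative.min():.2f}")

With this stochastic stand-in adversary the learner concentrates its weight on actions incurring small kernel losses; the paper's analysis, by contrast, handles an adversarial sequence and works with the RKHS structure directly rather than on a fixed grid.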

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-chatterji19a,
  title     = {Online learning with kernel losses},
  author    = {Chatterji, Niladri and Pacchiano, Aldo and Bartlett, Peter},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {971--980},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/chatterji19a/chatterji19a.pdf},
  url       = {https://proceedings.mlr.press/v97/chatterji19a.html}
}
Endnote
%0 Conference Paper
%T Online learning with kernel losses
%A Niladri Chatterji
%A Aldo Pacchiano
%A Peter Bartlett
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-chatterji19a
%I PMLR
%P 971--980
%U https://proceedings.mlr.press/v97/chatterji19a.html
%V 97
APA
Chatterji, N., Pacchiano, A. & Bartlett, P. (2019). Online learning with kernel losses. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:971-980. Available from https://proceedings.mlr.press/v97/chatterji19a.html.