Matrix-Free Preconditioning in Online Learning

Ashok Cutkosky, Tamas Sarlos
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1455-1464, 2019.

Abstract

We provide an online convex optimization algorithm with regret that interpolates between the regret of an algorithm using an optimal preconditioning matrix and one using a diagonal preconditioning matrix. Our regret bound is never worse than that obtained by diagonal preconditioning, and in certain settings even surpasses that of algorithms with full-matrix preconditioning. Importantly, our algorithm runs in the same time and space complexity as online gradient descent. Along the way we incorporate new techniques that mildly streamline and improve logarithmic factors in prior regret analyses. We conclude by benchmarking our algorithm on synthetic data and deep learning tasks.
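For context, the two baselines that the regret bound interpolates between can be illustrated with the classical AdaGrad-style updates. The sketch below shows diagonal preconditioning (O(d) per step, the same cost regime as online gradient descent) against full-matrix preconditioning (O(d^2) space and, with a matrix square root, O(d^3) time per step). This is an illustrative sketch of those standard baselines only, not the paper's matrix-free algorithm; all function names, step sizes, and the toy data are chosen here for exposition.

```python
import numpy as np

def adagrad_diagonal(grads, lr=1.0, eps=1e-8):
    """Diagonal-preconditioned online gradient descent (AdaGrad-style).

    Per-step cost is O(d) in time and space, matching plain online
    gradient descent up to constants -- the cost regime the paper's
    algorithm also operates in.
    """
    d = grads[0].shape[0]
    w = np.zeros(d)
    G = np.zeros(d)  # running sum of squared gradient coordinates
    for g in grads:
        G += g * g
        w -= lr * g / (np.sqrt(G) + eps)  # precondition by diag(G)^{-1/2}
    return w

def adagrad_full_matrix(grads, lr=1.0, eps=1e-8):
    """Full-matrix-preconditioned variant.

    Maintains the d x d sum of gradient outer products, so each step
    costs O(d^2) space and O(d^3) time for the inverse square root --
    the cost that matrix-free methods avoid.
    """
    d = grads[0].shape[0]
    w = np.zeros(d)
    G = np.zeros((d, d))  # running sum of gradient outer products
    for g in grads:
        G += np.outer(g, g)
        # Inverse square root of G via eigendecomposition.
        vals, vecs = np.linalg.eigh(G)
        inv_sqrt = vecs @ np.diag(1.0 / (np.sqrt(np.maximum(vals, 0.0)) + eps)) @ vecs.T
        w -= lr * inv_sqrt @ g
    return w

# Toy usage: gradients concentrated along one skewed direction, the
# kind of geometry where full-matrix preconditioning can help.
rng = np.random.default_rng(0)
u = rng.normal(size=5)
grads = [u * rng.normal() + 0.01 * rng.normal(size=5) for _ in range(100)]
print(adagrad_diagonal(grads))
print(adagrad_full_matrix(grads))
```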

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-cutkosky19b,
  title = {Matrix-Free Preconditioning in Online Learning},
  author = {Cutkosky, Ashok and Sarlos, Tamas},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {1455--1464},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/cutkosky19b/cutkosky19b.pdf},
  url = {https://proceedings.mlr.press/v97/cutkosky19b.html},
  abstract = {We provide an online convex optimization algorithm with regret that interpolates between the regret of an algorithm using an optimal preconditioning matrix and one using a diagonal preconditioning matrix. Our regret bound is never worse than that obtained by diagonal preconditioning, and in certain settings even surpasses that of algorithms with full-matrix preconditioning. Importantly, our algorithm runs in the same time and space complexity as online gradient descent. Along the way we incorporate new techniques that mildly streamline and improve logarithmic factors in prior regret analyses. We conclude by benchmarking our algorithm on synthetic data and deep learning tasks.}
}
EndNote
%0 Conference Paper
%T Matrix-Free Preconditioning in Online Learning
%A Ashok Cutkosky
%A Tamas Sarlos
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-cutkosky19b
%I PMLR
%P 1455--1464
%U https://proceedings.mlr.press/v97/cutkosky19b.html
%V 97
%X We provide an online convex optimization algorithm with regret that interpolates between the regret of an algorithm using an optimal preconditioning matrix and one using a diagonal preconditioning matrix. Our regret bound is never worse than that obtained by diagonal preconditioning, and in certain settings even surpasses that of algorithms with full-matrix preconditioning. Importantly, our algorithm runs in the same time and space complexity as online gradient descent. Along the way we incorporate new techniques that mildly streamline and improve logarithmic factors in prior regret analyses. We conclude by benchmarking our algorithm on synthetic data and deep learning tasks.
APA
Cutkosky, A. & Sarlos, T. (2019). Matrix-Free Preconditioning in Online Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1455-1464. Available from https://proceedings.mlr.press/v97/cutkosky19b.html.