Online Adaptive Principal Component Analysis and Its extensions

Jianjun Yuan, Andrew Lamperski
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7213-7221, 2019.

Abstract

We propose algorithms for online principal component analysis (PCA) and variance minimization in adaptive settings. Previous literature has focused on upper bounding the static adversarial regret, whose comparator is the optimal fixed action in hindsight. However, static regret is not an appropriate metric when the underlying environment is changing. Instead, we adopt the adaptive regret metric from the previous literature and propose online adaptive algorithms for PCA and variance minimization that have sub-linear adaptive regret guarantees. We demonstrate both theoretically and experimentally that the proposed algorithms can adapt to changing environments.
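For context, the contrast the abstract draws can be sketched with the standard definitions from the online learning literature (the notation below is generic, not taken from this paper). With loss functions $f_t$ and the learner's actions $x_t$, static regret compares against a single fixed comparator over the whole horizon, while adaptive regret takes the worst case over all contiguous intervals:

```latex
% Static adversarial regret: comparator is one fixed action in hindsight.
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)

% Adaptive regret: worst case over all intervals [r, s] \subseteq [1, T],
% so the comparator may differ on each interval.
\mathrm{AdaptiveRegret}_T \;=\; \max_{1 \le r \le s \le T}
  \left( \sum_{t=r}^{s} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=r}^{s} f_t(x) \right)
```

A sub-linear bound on the adaptive quantity implies low regret on every interval, which is why it is the appropriate metric when the best action drifts over time.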

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-yuan19a,
  title     = {Online Adaptive Principal Component Analysis and Its extensions},
  author    = {Yuan, Jianjun and Lamperski, Andrew},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7213--7221},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/yuan19a/yuan19a.pdf},
  url       = {https://proceedings.mlr.press/v97/yuan19a.html},
  abstract  = {We propose algorithms for online principal component analysis (PCA) and variance minimization for adaptive settings. Previous literature has focused on upper bounding the static adversarial regret, whose comparator is the optimal fixed action in hindsight. However, static regret is not an appropriate metric when the underlying environment is changing. Instead, we adopt the adaptive regret metric from the previous literature and propose online adaptive algorithms for PCA and variance minimization, that have sub-linear adaptive regret guarantees. We demonstrate both theoretically and experimentally that the proposed algorithms can adapt to the changing environments.}
}
Endnote
%0 Conference Paper
%T Online Adaptive Principal Component Analysis and Its extensions
%A Jianjun Yuan
%A Andrew Lamperski
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-yuan19a
%I PMLR
%P 7213--7221
%U https://proceedings.mlr.press/v97/yuan19a.html
%V 97
%X We propose algorithms for online principal component analysis (PCA) and variance minimization for adaptive settings. Previous literature has focused on upper bounding the static adversarial regret, whose comparator is the optimal fixed action in hindsight. However, static regret is not an appropriate metric when the underlying environment is changing. Instead, we adopt the adaptive regret metric from the previous literature and propose online adaptive algorithms for PCA and variance minimization, that have sub-linear adaptive regret guarantees. We demonstrate both theoretically and experimentally that the proposed algorithms can adapt to the changing environments.
APA
Yuan, J. & Lamperski, A. (2019). Online Adaptive Principal Component Analysis and Its extensions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7213-7221. Available from https://proceedings.mlr.press/v97/yuan19a.html.