Online Ensemble Multi-kernel Learning Adaptive to Non-stationary and Adversarial Environments

Yanning Shen, Tianyi Chen, Georgios Giannakis
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:2037-2046, 2018.

Abstract

Kernel-based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ To further boost performance in non-stationary environments, an adaptive multi-kernel learning scheme, termed AdaRaker, is developed with affordable computation and memory complexity. Performance is analyzed in terms of both static and dynamic regret. To the best of our knowledge, AdaRaker is the first algorithm that can optimally track nonlinear functions in non-stationary settings with strong theoretical guarantees. Numerical tests on real datasets showcase the effectiveness of the proposed algorithms.
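To make the high-level description concrete, the sketch below illustrates the core mechanism the abstract alludes to: each kernel in a prescribed dictionary is approximated with random Fourier features, a per-kernel predictor is updated online by gradient descent, and the ensemble weights over kernels are adjusted multiplicatively according to instantaneous losses. This is a minimal illustration under assumed choices (Gaussian kernels, squared loss, illustrative step sizes and class/variable names); it is not the authors' exact Raker/AdaRaker algorithms.

```python
import numpy as np

class RFOnlineMKL:
    """Minimal sketch of random-feature-based online multi-kernel learning."""

    def __init__(self, dim, bandwidths=(0.5, 1.0, 2.0), n_features=50,
                 step=0.1, hedge=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # One random Fourier feature map per Gaussian kernel in the dictionary.
        self.V = [rng.normal(scale=1.0 / s, size=(n_features, dim))
                  for s in bandwidths]
        self.theta = [np.zeros(2 * n_features) for _ in bandwidths]  # per-kernel weights
        self.w = np.ones(len(bandwidths)) / len(bandwidths)          # ensemble weights
        self.step, self.hedge, self.D = step, hedge, n_features

    def _features(self, x, p):
        # Random Fourier features approximating the p-th Gaussian kernel.
        vx = self.V[p] @ x
        return np.concatenate([np.sin(vx), np.cos(vx)]) / np.sqrt(self.D)

    def predict(self, x):
        # Ensemble prediction: convex combination of per-kernel predictors.
        preds = np.array([self.theta[p] @ self._features(x, p)
                          for p in range(len(self.theta))])
        return self.w @ preds, preds

    def update(self, x, y):
        y_hat, preds = self.predict(x)
        losses = (preds - y) ** 2
        for p in range(len(self.theta)):
            # Online gradient step on the squared loss for each kernel.
            grad = 2.0 * (preds[p] - y) * self._features(x, p)
            self.theta[p] -= self.step * grad
        # Multiplicative (exponential) weight update over the kernel dictionary.
        self.w *= np.exp(-self.hedge * losses)
        self.w /= self.w.sum()
        return y_hat
```

In a streaming regression task, `update(x_t, y_t)` would be called once per incoming pair, and the returned prediction incurs the online loss whose accumulation the static and dynamic regret analyses bound; the paper's AdaRaker additionally adapts this procedure to non-stationary environments.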

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-shen18a,
  title     = {Online Ensemble Multi-kernel Learning Adaptive to Non-stationary and Adversarial Environments},
  author    = {Shen, Yanning and Chen, Tianyi and Giannakis, Georgios},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {2037--2046},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/shen18a/shen18a.pdf},
  url       = {https://proceedings.mlr.press/v84/shen18a.html},
  abstract  = {Kernel-based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ To further boost performance in non-stationary environments, an adaptive multi-kernel learning scheme is developed with affordable computation and memory complexity. Performance is analyzed in terms of both static and dynamic regret. To our best knowledge, AdaRaker is the first algorithm that can optimally track nonlinear functions in non-stationary settings with strong theoretical guarantees. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.}
}
Endnote
%0 Conference Paper
%T Online Ensemble Multi-kernel Learning Adaptive to Non-stationary and Adversarial Environments
%A Yanning Shen
%A Tianyi Chen
%A Georgios Giannakis
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-shen18a
%I PMLR
%P 2037--2046
%U https://proceedings.mlr.press/v84/shen18a.html
%V 84
%X Kernel-based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ To further boost performance in non-stationary environments, an adaptive multi-kernel learning scheme is developed with affordable computation and memory complexity. Performance is analyzed in terms of both static and dynamic regret. To our best knowledge, AdaRaker is the first algorithm that can optimally track nonlinear functions in non-stationary settings with strong theoretical guarantees. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.
APA
Shen, Y., Chen, T., & Giannakis, G. (2018). Online Ensemble Multi-kernel Learning Adaptive to Non-stationary and Adversarial Environments. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:2037-2046. Available from https://proceedings.mlr.press/v84/shen18a.html.