Online Ensemble Multi-kernel Learning Adaptive to Non-stationary and Adversarial Environments
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:2037-2046, 2018.
Kernel-based methods exhibit well-documented performance in various nonlinear learning tasks. Most of them rely on a preselected kernel, whose prudent choice presumes task-specific prior information. To cope with this limitation, multi-kernel learning has gained popularity thanks to its flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging the random feature approximation and its recent orthogonality-promoting variant, the present contribution develops an online multi-kernel learning scheme to infer the intended nonlinear function ‘on the fly.’ To further boost performance in non-stationary environments, an adaptive multi-kernel learning scheme, termed AdaRaker, is developed with affordable computation and memory complexity. Performance is analyzed in terms of both static and dynamic regret. To the best of our knowledge, AdaRaker is the first algorithm that can optimally track nonlinear functions in non-stationary settings with strong theoretical guarantees. Numerical tests on real datasets are carried out to showcase the effectiveness of the proposed algorithms.
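To make the abstract's ingredients concrete, the following is a minimal illustrative sketch, not the authors' exact Raker/AdaRaker algorithm: each candidate Gaussian kernel in the dictionary is approximated with random Fourier features, each per-kernel predictor is trained by online gradient descent, and the predictors are combined with multiplicative (Hedge-style) ensemble weights. All class, parameter, and variable names here are hypothetical choices for illustration.

```python
import numpy as np

class OnlineRFEnsemble:
    """Illustrative online multi-kernel learner: random Fourier features
    approximate each Gaussian kernel in a dictionary of bandwidths; per-kernel
    predictors are updated by online gradient descent on the squared loss, and
    an exponentiated-loss (Hedge-style) rule weighs the ensemble."""

    def __init__(self, dim, sigmas, n_features=50, lr=0.1, eta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.sigmas = list(sigmas)
        self.D = n_features
        self.lr, self.eta = lr, eta
        # One fixed random feature map per kernel bandwidth sigma:
        # z(x) = sqrt(2/D) * cos(x @ W + b), with W ~ N(0, 1/sigma^2).
        self.W = [rng.normal(scale=1.0 / s, size=(dim, n_features)) for s in sigmas]
        self.b = [rng.uniform(0.0, 2.0 * np.pi, size=n_features) for _ in sigmas]
        self.theta = [np.zeros(n_features) for _ in sigmas]   # per-kernel weights
        self.w = np.ones(len(self.sigmas)) / len(self.sigmas)  # ensemble weights

    def _features(self, x, p):
        return np.sqrt(2.0 / self.D) * np.cos(x @ self.W[p] + self.b[p])

    def predict(self, x):
        preds = np.array([self._features(x, p) @ self.theta[p]
                          for p in range(len(self.sigmas))])
        return self.w @ preds, preds

    def update(self, x, y):
        """One online round: predict, then adapt both levels of the learner."""
        yhat, preds = self.predict(x)
        losses = (preds - y) ** 2
        # Online gradient step for each kernel's linear model in feature space.
        for p in range(len(self.sigmas)):
            z = self._features(x, p)
            self.theta[p] -= self.lr * 2.0 * (preds[p] - y) * z
        # Multiplicative update of the ensemble weights by incurred losses.
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.sum()
        return yhat
```

Streaming a nonlinear target through `update` shows the ensemble tracking the function online; kernels whose bandwidth suits the data accumulate ensemble weight. The full AdaRaker scheme additionally layers learning rates hierarchically to obtain the dynamic-regret guarantees mentioned above.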