A Bayesian Framework for Online Classifier Ensemble

Qinxun Bai, Henry Lam, Stan Sclaroff;
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1584-1592, 2014.

Abstract

We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework converges to the expected loss minimizer at a faster rate than standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.
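The core idea of the abstract, turning a loss function into a pseudo-likelihood so that ensemble weights carry a posterior distribution that is updated recursively as examples arrive, can be illustrated with a minimal particle-based sketch. This is not the paper's algorithm; all names, the hinge-style loss, the Dirichlet prior over the weight simplex, and the particle approximation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classifiers = 3
n_particles = 500

# Particles: candidate weight vectors on the simplex (Dirichlet prior).
# The posterior over ensemble weights is approximated by reweighting
# these particles with a pseudo-likelihood proportional to exp(-loss).
particles = rng.dirichlet(np.ones(n_classifiers), size=n_particles)
log_post = np.zeros(n_particles)  # unnormalized log-posterior weights


def update(h, y, temperature=1.0):
    """One online step.

    h: per-classifier predictions in {-1, +1}, shape (n_classifiers,)
    y: true label in {-1, +1}
    """
    global log_post
    # Ensemble margin y * w^T h under each particle's weight vector.
    margins = particles @ (h * y)
    # Hinge-like loss on the margin; exp(-loss) acts as the likelihood.
    loss = np.maximum(0.0, 1.0 - margins)
    log_post -= temperature * loss


def posterior_mean():
    """Posterior-mean weight vector (normalized in log space for stability)."""
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return w @ particles


# Simulated stream: classifier 0 is 90% accurate, the others are noise.
for _ in range(200):
    y = rng.choice([-1, 1])
    h = np.array([y if rng.random() < 0.9 else -y,
                  rng.choice([-1, 1]),
                  rng.choice([-1, 1])])
    update(h, y)

w_hat = posterior_mean()
# The posterior mean concentrates on the reliable classifier.
```

After a few hundred examples, the posterior-mean weight on the reliable classifier dominates the noise classifiers, which is the qualitative behavior one would expect from any posterior-based weighting scheme of this kind.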

Related Material