Active Learning as Non-Convex Optimization

Andrew Guillory, Erick Chastain, Jeff Bilmes;
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:201-208, 2009.

Abstract

We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline active learning algorithm and non-convex losses. Finally, we discuss and show empirically how viewing active learning as non-convex loss minimization helps explain two previously observed phenomena: on some data sets certain active learning algorithms achieve better generalization error than passive learning algorithms, while on other data sets many active learning algorithms are prone to local minima.
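
The following is a minimal, illustrative sketch of the online viewpoint under simple assumptions, not the paper's exact algorithm or objective: a margin-based selective-sampling learner whose weight update coincides with a stochastic gradient step on a non-convex ramp (clipped hinge) loss, so a label is requested only where the loss is not flat for both possible labels. The threshold b, step size eta, initialization, and synthetic data stream are all hypothetical choices.

```python
# Illustrative sketch only: selective sampling read as SGD on a non-convex
# ramp (clipped hinge) loss. Not the construction from Guillory et al. (2009).
import numpy as np

def ramp_grad(w, x, y, b=1.0):
    """Gradient of the ramp loss min(b, max(0, b - y*w.x)) w.r.t. w.
    The loss is flat for y*w.x >= b and for y*w.x <= 0, so the gradient
    is non-zero only in the low-margin band 0 < y*w.x < b."""
    m = y * w.dot(x)
    if 0.0 < m < b:
        return -y * x
    return np.zeros_like(w)

rng = np.random.default_rng(0)
d, T, eta, b = 5, 2000, 0.1, 1.0
w_true = rng.normal(size=d)          # simulated labeling oracle
w = 0.1 * rng.normal(size=d)         # small random start so updates can occur
queries = 0

for _ in range(T):
    x = rng.normal(size=d)
    # Outside the band |w.x| < b the ramp loss is flat for either label,
    # so the SGD step is zero and no label needs to be requested.
    if abs(w.dot(x)) < b:
        y = 1.0 if w_true.dot(x) >= 0 else -1.0   # query the label
        queries += 1
        w -= eta * ramp_grad(w, x, y, b)

cos = w.dot(w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
print(f"labels queried: {queries} of {T}; cosine(w, w_true) = {cos:.2f}")
```

In this sketch the querying rule falls out of the flat regions of the non-convex loss: points classified with margin at least b contribute zero gradient regardless of their label, so their labels never need to be purchased, while the sloped region of the loss is exactly where a gradient step (and hence a query) is useful.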
