Active Learning as Non-Convex Optimization

Andrew Guillory, Erick Chastain, Jeff Bilmes
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:201-208, 2009.

Abstract

We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline active learning algorithm and non-convex losses. Finally, we discuss and show empirically how viewing active learning as non-convex loss minimization helps explain two previously observed phenomena: certain active learning algorithms achieve better generalization error than passive learning algorithms on some data sets, while on other data sets many active learning algorithms are prone to local minima.
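To make the abstract's central idea concrete, here is a minimal sketch of a query-by-margin online active learner whose label-efficient update step can be read as a stochastic-gradient-style step on a margin-gated (hence non-convex) loss. This is an illustration under assumed details (`lr`, `margin`, a perceptron-style update), not the paper's exact algorithms or objective functions.

```python
import numpy as np

def online_active_learn(stream, dim, lr=0.1, margin=0.5):
    """Query-by-margin online active learning, sketched as
    stochastic-gradient-style updates. Illustrative only; the
    hyperparameters and update rule are assumptions, not the
    paper's specific algorithms."""
    w = np.zeros(dim)
    queries = 0
    for x, label_oracle in stream:
        score = w @ x
        # Request a label only when the current hypothesis is
        # uncertain (small margin); otherwise skip this example,
        # which is what makes the loss effectively non-convex.
        if abs(score) < margin:
            y = label_oracle()        # query the true label
            queries += 1
            if y * score <= 0:        # mistake: take a gradient-style
                w += lr * y * x       # step on the queried example
    return w, queries
```

Because updates happen only inside the uncertainty region, the learner can use far fewer labels than a passive learner sees, at the cost of an objective landscape where a bad early hypothesis can stop querying informative points entirely, which is one way to picture the local-minima phenomenon the abstract mentions.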

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-guillory09a,
  title     = {Active Learning as Non-Convex Optimization},
  author    = {Andrew Guillory and Erick Chastain and Jeff Bilmes},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {201--208},
  year      = {2009},
  editor    = {David van Dyk and Max Welling},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/guillory09a/guillory09a.pdf},
  url       = {http://proceedings.mlr.press/v5/guillory09a.html},
  abstract  = {We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline active learning algorithm and non-convex losses. Finally, we discuss and show empirically how viewing active learning as non-convex loss minimization helps explain two previously observed phenomena: certain active learning algorithms achieve better generalization error than passive learning algorithms on some data sets, while on other data sets many active learning algorithms are prone to local minima.}
}
Endnote
%0 Conference Paper
%T Active Learning as Non-Convex Optimization
%A Andrew Guillory
%A Erick Chastain
%A Jeff Bilmes
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-guillory09a
%I PMLR
%J Proceedings of Machine Learning Research
%P 201--208
%U http://proceedings.mlr.press
%V 5
%W PMLR
%X We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline active learning algorithm and non-convex losses. Finally, we discuss and show empirically how viewing active learning as non-convex loss minimization helps explain two previously observed phenomena: certain active learning algorithms achieve better generalization error than passive learning algorithms on some data sets, while on other data sets many active learning algorithms are prone to local minima.
RIS
TY  - CPAPER
TI  - Active Learning as Non-Convex Optimization
AU  - Andrew Guillory
AU  - Erick Chastain
AU  - Jeff Bilmes
BT  - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
PY  - 2009/04/15
DA  - 2009/04/15
ED  - David van Dyk
ED  - Max Welling
ID  - pmlr-v5-guillory09a
PB  - PMLR
SP  - 201
EP  - 208
DP  - PMLR
L1  - http://proceedings.mlr.press/v5/guillory09a/guillory09a.pdf
UR  - http://proceedings.mlr.press/v5/guillory09a.html
AB  - We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline active learning algorithm and non-convex losses. Finally, we discuss and show empirically how viewing active learning as non-convex loss minimization helps explain two previously observed phenomena: certain active learning algorithms achieve better generalization error than passive learning algorithms on some data sets, while on other data sets many active learning algorithms are prone to local minima.
ER  -
APA
Guillory, A., Chastain, E. &amp; Bilmes, J. (2009). Active Learning as Non-Convex Optimization. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in PMLR 5:201-208.
