Multi-view learning over structured and non-identical outputs

Kuzman Ganchev, João V. Graça, John Blitzer, Ben Taskar
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:204-211, 2008.

Abstract

In many machine learning problems, labeled training data is limited but unlabeled data is ample. Some of these problems have instances that can be factored into multiple views, each of which is nearly sufficient in determining the correct labels. In this paper we present a new algorithm for probabilistic multi-view learning which uses the idea of stochastic agreement between views as regularization. Our algorithm works on structured and unstructured problems and easily generalizes to partial agreement scenarios. For the full agreement case, our algorithm minimizes the Bhattacharyya distance between the models of each view, and performs better than CoBoosting and two-view Perceptron on several flat and structured classification problems.
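As a brief illustration of the agreement regularizer the abstract describes (a minimal sketch, not the paper's implementation), the Bhattacharyya distance between two discrete label distributions can be computed as follows:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance BD(p, q) = -ln( sum_y sqrt(p(y) * q(y)) )
    between two discrete distributions over the same label set.
    It is zero iff p == q, so penalizing it encourages the two
    view models to agree on unlabeled examples."""
    # Bhattacharyya coefficient: overlap between the distributions.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)

# Two views that predict identical distributions incur zero penalty:
p = [0.7, 0.2, 0.1]
print(bhattacharyya_distance(p, p))  # ~0.0

# Views that disagree are penalized more, the larger the disagreement:
q = [0.1, 0.2, 0.7]
print(bhattacharyya_distance(p, q))
```

In the full-agreement setting the paper describes, this quantity (summed over unlabeled instances) acts as the regularization term tying the two view models together.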

Cite this Paper


BibTeX
@InProceedings{pmlr-vR6-ganchev08a,
  title     = {Multi-view learning over structured and non-identical outputs},
  author    = {Ganchev, Kuzman and Gra\c{c}a, Jo\~{a}o V. and Blitzer, John and Taskar, Ben},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {204--211},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/ganchev08a/ganchev08a.pdf},
  url       = {https://proceedings.mlr.press/r6/ganchev08a.html},
  abstract  = {In many machine learning problems, labeled training data is limited but unlabeled data is ample. Some of these problems have instances that can be factored into multiple views, each of which is nearly sufficient in determining the correct labels. In this paper we present a new algorithm for probabilistic multi-view learning which uses the idea of stochastic agreement between views as regularization. Our algorithm works on structured and unstructured problems and easily generalizes to partial agreement scenarios. For the full agreement case, our algorithm minimizes the Bhattacharyya distance between the models of each view, and performs better than CoBoosting and two-view Perceptron on several flat and structured classification problems.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Multi-view learning over structured and non-identical outputs
%A Kuzman Ganchev
%A João V. Graça
%A John Blitzer
%A Ben Taskar
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-ganchev08a
%I PMLR
%P 204--211
%U https://proceedings.mlr.press/r6/ganchev08a.html
%V R6
%X In many machine learning problems, labeled training data is limited but unlabeled data is ample. Some of these problems have instances that can be factored into multiple views, each of which is nearly sufficient in determining the correct labels. In this paper we present a new algorithm for probabilistic multi-view learning which uses the idea of stochastic agreement between views as regularization. Our algorithm works on structured and unstructured problems and easily generalizes to partial agreement scenarios. For the full agreement case, our algorithm minimizes the Bhattacharyya distance between the models of each view, and performs better than CoBoosting and two-view Perceptron on several flat and structured classification problems.
%Z Reissued by PMLR on 09 October 2024.
APA
Ganchev, K., Graça, J.V., Blitzer, J. &amp; Taskar, B. (2008). Multi-view learning over structured and non-identical outputs. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:204-211. Available from https://proceedings.mlr.press/r6/ganchev08a.html. Reissued by PMLR on 09 October 2024.