Semi-Supervised Learning via Generalized Maximum Entropy

Ayse Erkan, Yasemin Altun
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:209-216, 2010.

Abstract

Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in various forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multi-class, easy to implement, and that can be kernelized naturally. Experimental evaluation of special cases shows the competitiveness of our methodology.
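As a rough orientation, the generalized MaxEnt problem the abstract refers to can be sketched in a standard form (notation here is illustrative, not necessarily the exact formulation used in the paper):

```latex
% Generalized MaxEnt sketch: maximize conditional entropy subject to
% relaxed moment-matching constraints on feature functions \phi.
\begin{aligned}
\max_{p}\;& H(p) \;=\; -\sum_{x,y} \tilde{p}(x)\, p(y \mid x) \log p(y \mid x) \\
\text{s.t.}\;& \bigl\| \mathbb{E}_{\tilde{p}(x)\, p(y \mid x)}\!\left[\phi(x,y)\right]
  - \mathbb{E}_{\tilde{p}(x,y)}\!\left[\phi(x,y)\right] \bigr\| \;\le\; \epsilon .
\end{aligned}
```

Under this standard setup, the convex dual recovers regularized maximum-likelihood estimation of a log-linear (multinomial logistic) model, which is the kind of supervised special case the abstract alludes to; the paper's contribution is to modify the potential functions using unlabeled data.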

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-erkan10a,
  title = {Semi-Supervised Learning via Generalized Maximum Entropy},
  author = {Erkan, Ayse and Altun, Yasemin},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages = {209--216},
  year = {2010},
  editor = {Teh, Yee Whye and Titterington, Mike},
  volume = {9},
  series = {Proceedings of Machine Learning Research},
  address = {Chia Laguna Resort, Sardinia, Italy},
  month = {13--15 May},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v9/erkan10a/erkan10a.pdf},
  url = {https://proceedings.mlr.press/v9/erkan10a.html},
  abstract = {Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms, that are convex, scalable, inherently multi-class, easy to implement, and that can be kernelized naturally. Experimental evaluation of special cases shows the competitiveness of our methodology.}
}
Endnote
%0 Conference Paper
%T Semi-Supervised Learning via Generalized Maximum Entropy
%A Ayse Erkan
%A Yasemin Altun
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-erkan10a
%I PMLR
%P 209--216
%U https://proceedings.mlr.press/v9/erkan10a.html
%V 9
%X Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms, that are convex, scalable, inherently multi-class, easy to implement, and that can be kernelized naturally. Experimental evaluation of special cases shows the competitiveness of our methodology.
RIS
TY  - CPAPER
TI  - Semi-Supervised Learning via Generalized Maximum Entropy
AU  - Ayse Erkan
AU  - Yasemin Altun
BT  - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA  - 2010/03/31
ED  - Yee Whye Teh
ED  - Mike Titterington
ID  - pmlr-v9-erkan10a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 9
SP  - 209
EP  - 216
L1  - http://proceedings.mlr.press/v9/erkan10a/erkan10a.pdf
UR  - https://proceedings.mlr.press/v9/erkan10a.html
AB  - Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms, that are convex, scalable, inherently multi-class, easy to implement, and that can be kernelized naturally. Experimental evaluation of special cases shows the competitiveness of our methodology.
ER  -
APA
Erkan, A. & Altun, Y. (2010). Semi-Supervised Learning via Generalized Maximum Entropy. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:209-216. Available from https://proceedings.mlr.press/v9/erkan10a.html.