Large-margin Weakly Supervised Dimensionality Reduction

Chang Xu, Dacheng Tao, Chao Xu, Yong Rui
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):865-873, 2014.

Abstract

This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
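
To make the abstract's two large-margin criteria concrete, the following is a minimal sketch (in Python/NumPy) of one plausible reading of the distance and angle terms for a linear projection W applied to weakly supervised preference pairs. All names are illustrative; this does not reproduce the paper's exact objective, nor its alternating-direction or gradient-boosting optimizers.

import numpy as np

def large_margin_terms(W, pairs, margin=1.0):
    # W: (k, d) projection matrix; pairs: list of (preferred, other) vectors of shape (d,).
    diffs = np.stack([W @ (a - b) for a, b in pairs])   # projected pair differences, shape (m, k)
    dists = np.linalg.norm(diffs, axis=1)

    # Distance criterion: hinge loss that vanishes once the two examples of a
    # preference pair are at least `margin` apart in the low-dimensional space.
    dist_loss = np.maximum(0.0, margin - dists).mean()

    # Angle criterion: penalize pairs of preference pairs whose projected
    # difference vectors point in opposing directions (negative cosine similarity).
    unit = diffs / (dists[:, None] + 1e-12)
    cos = unit @ unit.T
    upper = cos[np.triu_indices(len(pairs), k=1)]
    angle_loss = np.maximum(0.0, -upper).mean() if upper.size else 0.0

    return dist_loss, angle_loss

# Toy usage: project 5-dimensional examples to 2 dimensions and evaluate both terms.
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=5), rng.normal(size=5)) for _ in range(10)]
W = rng.normal(scale=0.1, size=(2, 5))
print(large_margin_terms(W, pairs))

In this hypothetical reading, the distance term encourages separability of the examples within each preference pair after projection, while the angle term encourages the projected pair-difference directions to agree across pairs, which is one way to interpret the abstract's "angle consistency".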

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-xu14,
  title     = {Large-margin Weakly Supervised Dimensionality Reduction},
  author    = {Chang Xu and Dacheng Tao and Chao Xu and Yong Rui},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {865--873},
  year      = {2014},
  editor    = {Eric P. Xing and Tony Jebara},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/xu14.pdf},
  url       = {http://proceedings.mlr.press/v32/xu14.html},
  abstract  = {This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.}
}
Endnote
%0 Conference Paper
%T Large-margin Weakly Supervised Dimensionality Reduction
%A Chang Xu
%A Dacheng Tao
%A Chao Xu
%A Yong Rui
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-xu14
%I PMLR
%J Proceedings of Machine Learning Research
%P 865--873
%U http://proceedings.mlr.press
%V 32
%N 2
%W PMLR
%X This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
RIS
TY - CPAPER
TI - Large-margin Weakly Supervised Dimensionality Reduction
AU - Chang Xu
AU - Dacheng Tao
AU - Chao Xu
AU - Yong Rui
BT - Proceedings of the 31st International Conference on Machine Learning
PY - 2014/01/27
DA - 2014/01/27
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-xu14
PB - PMLR
SP - 865
DP - PMLR
EP - 873
L1 - http://proceedings.mlr.press/v32/xu14.pdf
UR - http://proceedings.mlr.press/v32/xu14.html
AB - This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
ER -
APA
Xu, C., Tao, D., Xu, C. & Rui, Y. (2014). Large-margin Weakly Supervised Dimensionality Reduction. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):865-873.
