Large-margin Weakly Supervised Dimensionality Reduction

Chang Xu, Dacheng Tao, Chao Xu, Yong Rui
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):865-873, 2014.

Abstract

This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
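The abstract's core idea of learning a linear transformation that pushes the examples in each preference pair apart by a margin can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the paper's actual algorithm: it uses a plain subgradient descent on a pairwise hinge loss (the paper instead uses an alternating direction method and a combined angle/distance criterion), and all names here (`hinge_pair_loss`, `learn_W`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_pair_loss(W, pairs, margin=1.0):
    """Sum of hinge losses encouraging ||W(x_i - x_j)|| >= margin
    for each preference pair (x_i, x_j) in the low-dimensional space."""
    loss = 0.0
    for xi, xj in pairs:
        d = W @ (xi - xj)
        loss += max(0.0, margin - np.linalg.norm(d))
    return loss

def learn_W(pairs, d_in, d_out, lr=0.1, steps=200, margin=1.0):
    """Toy subgradient descent on the pairwise hinge loss above."""
    W = rng.standard_normal((d_out, d_in)) * 0.1
    for _ in range(steps):
        G = np.zeros_like(W)
        for xi, xj in pairs:
            diff = xi - xj
            d = W @ diff
            n = np.linalg.norm(d)
            if 1e-12 < n < margin:
                # subgradient of (margin - ||W diff||) with respect to W
                G += -np.outer(d / n, diff)
        W -= lr * G
    return W

# Toy data: preference pairs whose difference lies mostly along one coordinate.
pairs = [(rng.standard_normal(5) + np.array([2.0, 0, 0, 0, 0]),
          rng.standard_normal(5)) for _ in range(20)]
W = learn_W(pairs, d_in=5, d_out=2)
```

After training, the loss under the learned `W` should be lower than under a trivial (zero) transformation, reflecting that preference pairs are now separated by the margin in the 2-dimensional subspace.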

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-xu14,
  title =     {Large-margin Weakly Supervised Dimensionality Reduction},
  author =    {Xu, Chang and Tao, Dacheng and Xu, Chao and Rui, Yong},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages =     {865--873},
  year =      {2014},
  editor =    {Xing, Eric P. and Jebara, Tony},
  volume =    {32},
  number =    {2},
  series =    {Proceedings of Machine Learning Research},
  address =   {Beijing, China},
  month =     {22--24 Jun},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v32/xu14.pdf},
  url =       {https://proceedings.mlr.press/v32/xu14.html},
  abstract =  {This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.}
}
Endnote
%0 Conference Paper
%T Large-margin Weakly Supervised Dimensionality Reduction
%A Chang Xu
%A Dacheng Tao
%A Chao Xu
%A Yong Rui
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-xu14
%I PMLR
%P 865--873
%U https://proceedings.mlr.press/v32/xu14.html
%V 32
%N 2
%X This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
RIS
TY - CPAPER
TI - Large-margin Weakly Supervised Dimensionality Reduction
AU - Chang Xu
AU - Dacheng Tao
AU - Chao Xu
AU - Yong Rui
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-xu14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 865
EP - 873
L1 - http://proceedings.mlr.press/v32/xu14.pdf
UR - https://proceedings.mlr.press/v32/xu14.html
AB - This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage angle consistency between preference pairs and maximize the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix and a gradient boosting technique to optimize a non-linear transformation directly in the function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria can strengthen and improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
ER -
APA
Xu, C., Tao, D., Xu, C. & Rui, Y. (2014). Large-margin Weakly Supervised Dimensionality Reduction. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):865-873. Available from https://proceedings.mlr.press/v32/xu14.html.