Two-Stage Metric Learning

Jun Wang, Ke Sun, Fei Sha, Stéphane Marchand-Maillet, Alexandros Kalousis
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):370-378, 2014.

Abstract

In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metrics with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, the approach can also be interpreted as a local metric learning algorithm with a well-defined distance approximation. We evaluate its performance on a number of datasets; it significantly outperforms other metric learning methods and SVM.

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-wangc14,
  title     = {Two-Stage Metric Learning},
  author    = {Jun Wang and Ke Sun and Fei Sha and Stéphane Marchand-Maillet and Alexandros Kalousis},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {370--378},
  year      = {2014},
  editor    = {Eric P. Xing and Tony Jebara},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/wangc14.pdf},
  url       = {http://proceedings.mlr.press/v32/wangc14.html},
  abstract  = {In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metrics with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, the approach can also be interpreted as a local metric learning algorithm with a well-defined distance approximation. We evaluate its performance on a number of datasets; it significantly outperforms other metric learning methods and SVM.}
}
Endnote
%0 Conference Paper
%T Two-Stage Metric Learning
%A Jun Wang
%A Ke Sun
%A Fei Sha
%A Stéphane Marchand-Maillet
%A Alexandros Kalousis
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-wangc14
%I PMLR
%J Proceedings of Machine Learning Research
%P 370--378
%U http://proceedings.mlr.press
%V 32
%N 2
%W PMLR
%X In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metrics with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, the approach can also be interpreted as a local metric learning algorithm with a well-defined distance approximation. We evaluate its performance on a number of datasets; it significantly outperforms other metric learning methods and SVM.
RIS
TY - CPAPER
TI - Two-Stage Metric Learning
AU - Jun Wang
AU - Ke Sun
AU - Fei Sha
AU - Stéphane Marchand-Maillet
AU - Alexandros Kalousis
BT - Proceedings of the 31st International Conference on Machine Learning
PY - 2014/01/27
DA - 2014/01/27
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-wangc14
PB - PMLR
DP - PMLR
SP - 370
EP - 378
L1 - http://proceedings.mlr.press/v32/wangc14.pdf
UR - http://proceedings.mlr.press/v32/wangc14.html
AB - In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the associated statistical manifold. This induces in the input data space a new family of distance metrics with unique properties. Unlike kernelized metric learning, we do not require the similarity measure to be positive semi-definite. Moreover, the approach can also be interpreted as a local metric learning algorithm with a well-defined distance approximation. We evaluate its performance on a number of datasets; it significantly outperforms other metric learning methods and SVM.
ER -
APA
Wang, J., Sun, K., Sha, F., Marchand-Maillet, S. & Kalousis, A. (2014). Two-Stage Metric Learning. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):370-378