Cluster Canonical Correlation Analysis

Nikhil Rasiwasia, Dhruv Mahajan, Vijay Mahadevan, Gaurav Aggarwal
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:823-831, 2014.

Abstract

In this paper we present cluster canonical correlation analysis (cluster-CCA) for joint dimensionality reduction of two sets of data points. Unlike the standard pairwise correspondence between data points, in our problem each set is partitioned into multiple clusters or classes, where the class labels define correspondences between the sets. Cluster-CCA learns discriminant low-dimensional representations that maximize the correlation between the two sets while segregating the different classes in the learned space. Furthermore, we present a kernel extension, kernel cluster canonical correlation analysis (cluster-KCCA), that extends cluster-CCA to account for non-linear relationships. Cluster-(K)CCA is shown to be computationally efficient, with complexity similar to that of standard (K)CCA. Experimental evaluation on benchmark datasets shows that cluster-(K)CCA achieves state-of-the-art performance on cross-modal retrieval tasks.
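The abstract describes the method only at a high level, so the following is a minimal sketch of one plausible reading of cluster-CCA, not the authors' implementation: every point in the first set is paired with every same-class point in the second set, the cross-covariance is estimated from these cluster-induced pairs, and the standard CCA generalized eigenvalue problem is then solved. The function name cluster_cca, the ridge regularizer reg, and all other implementation details below are assumptions made for illustration.

    # Illustrative sketch of cluster-CCA as read from the abstract; NOT the
    # authors' code. Same-class points across the two sets are treated as
    # corresponding pairs when estimating the cross-covariance.
    import numpy as np
    from scipy.linalg import eigh

    def cluster_cca(X, Y, labels_x, labels_y, n_components=2, reg=1e-4):
        """X: (n_x, d_x), Y: (n_y, d_y); labels_* give each point's class."""
        labels_x, labels_y = np.asarray(labels_x), np.asarray(labels_y)
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        d_x, d_y = X.shape[1], Y.shape[1]

        # Cross-covariance over all same-class cross-set pairs. The sum over
        # all pairs (i, j) in class c equals the outer product of the
        # class-wise sums, so no explicit pair enumeration is needed.
        Sxy = np.zeros((d_x, d_y))
        n_pairs = 0
        for c in np.unique(labels_x):
            Xc, Yc = X[labels_x == c], Y[labels_y == c]
            Sxy += np.outer(Xc.sum(axis=0), Yc.sum(axis=0))
            n_pairs += len(Xc) * len(Yc)
        Sxy /= n_pairs

        # Within-set covariances, with a small ridge for numerical stability.
        Sxx = X.T @ X / len(X) + reg * np.eye(d_x)
        Syy = Y.T @ Y / len(Y) + reg * np.eye(d_y)

        # CCA directions for X: solve (Sxy Syy^-1 Syx) w = lambda Sxx w.
        M = Sxy @ np.linalg.solve(Syy, Sxy.T)
        eigvals, eigvecs = eigh(M, Sxx)  # eigenvalues in ascending order
        Wx = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]

        # Corresponding directions for Y, column-normalized.
        Wy = np.linalg.solve(Syy, Sxy.T @ Wx)
        Wy /= np.linalg.norm(Wy, axis=0, keepdims=True)
        return Wx, Wy

Under this reading, cross-modal retrieval would project both sets into the shared space (X @ Wx and Y @ Wy) and rank items of one modality against queries from the other, e.g. by cosine similarity; the kernel variant (cluster-KCCA) would replace the raw features with kernel Gram matrices.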

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-rasiwasia14,
  title     = {{Cluster Canonical Correlation Analysis}},
  author    = {Rasiwasia, Nikhil and Mahajan, Dhruv and Mahadevan, Vijay and Aggarwal, Gaurav},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {823--831},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/rasiwasia14.pdf},
  url       = {https://proceedings.mlr.press/v33/rasiwasia14.html}
}
APA
Rasiwasia, N., Mahajan, D., Mahadevan, V., & Aggarwal, G. (2014). Cluster Canonical Correlation Analysis. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:823-831. Available from https://proceedings.mlr.press/v33/rasiwasia14.html.

Related Material

Download PDF: http://proceedings.mlr.press/v33/rasiwasia14.pdf