Tighter and Convex Maximum Margin Clustering

Yu-Feng Li, Ivor W. Tsang, James Kwok, Zhi-Hua Zhou
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:344-351, 2009.

Abstract

The maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended to clustering, in a formulation referred to as Maximum Margin Clustering (MMC), and has achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via a convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable even for medium-sized data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin between opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-the-art convex MMCs. Experiments on eighteen UCI datasets and the MNIST dataset show significant improvement over existing MMC algorithms.
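For reference, the following is a minimal LaTeX sketch of the maximum margin clustering problem the abstract refers to, in the standard soft-margin form introduced by Xu et al. (2004). The regularization parameter C and the class-balance bound \ell are standard in that formulation; the exact constraint set and relaxation used by LG-MMC are developed in the paper itself and are not reproduced here.

% Maximum margin clustering: jointly search for the binary labeling y and
% the large-margin separator (w, b) it induces. The balance constraint
% (last line) excludes the trivial labeling that assigns every point to
% a single cluster; \ell >= 0 controls how unbalanced the clusters may be.
\begin{align*}
\min_{\mathbf{y} \in \{-1,+1\}^n} \; \min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \quad
  & \frac{1}{2}\lVert \mathbf{w} \rVert^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad
  & y_i \bigl( \mathbf{w}^\top \varphi(\mathbf{x}_i) + b \bigr) \ge 1 - \xi_i,
    \quad \xi_i \ge 0, \quad i = 1, \dots, n, \\
  & -\ell \;\le\; \sum_{i=1}^{n} y_i \;\le\; \ell .
\end{align*}

The outer minimization over the discrete label vector $\mathbf{y}$ is what makes the problem non-convex. Roughly speaking, earlier convex approaches relax the rank-one matrix $\mathbf{y}\mathbf{y}^\top$ to an arbitrary positive semi-definite matrix, which gives the SDP mentioned above, whereas LG-MMC restricts itself to convex combinations of matrices $\mathbf{y}_t\mathbf{y}_t^\top$ built from generated label vectors; the paper shows this is what yields both the tighter relaxation and the improved scalability claimed in the abstract.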

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-li09c,
  title = {Tighter and Convex Maximum Margin Clustering},
  author = {Li, Yu-Feng and Tsang, Ivor W. and Kwok, James and Zhou, Zhi-Hua},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages = {344--351},
  year = {2009},
  editor = {van Dyk, David and Welling, Max},
  volume = {5},
  series = {Proceedings of Machine Learning Research},
  address = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month = {16--18 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v5/li09c/li09c.pdf},
  url = {https://proceedings.mlr.press/v5/li09c.html},
  abstract = {The maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended to clustering, in a formulation referred to as Maximum Margin Clustering (MMC), and has achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via a convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable even for medium-sized data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin between opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-the-art convex MMCs. Experiments on eighteen UCI datasets and the MNIST dataset show significant improvement over existing MMC algorithms.}
}
Endnote
%0 Conference Paper
%T Tighter and Convex Maximum Margin Clustering
%A Yu-Feng Li
%A Ivor W. Tsang
%A James Kwok
%A Zhi-Hua Zhou
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-li09c
%I PMLR
%P 344--351
%U https://proceedings.mlr.press/v5/li09c.html
%V 5
%X The maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended to clustering, in a formulation referred to as Maximum Margin Clustering (MMC), and has achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via a convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable even for medium-sized data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin between opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-the-art convex MMCs. Experiments on eighteen UCI datasets and the MNIST dataset show significant improvement over existing MMC algorithms.
RIS
TY - CPAPER
TI - Tighter and Convex Maximum Margin Clustering
AU - Yu-Feng Li
AU - Ivor W. Tsang
AU - James Kwok
AU - Zhi-Hua Zhou
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-li09c
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 344
EP - 351
L1 - http://proceedings.mlr.press/v5/li09c/li09c.pdf
UR - https://proceedings.mlr.press/v5/li09c.html
AB - The maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended to clustering, in a formulation referred to as Maximum Margin Clustering (MMC), and has achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via a convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable even for medium-sized data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin between opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-the-art convex MMCs. Experiments on eighteen UCI datasets and the MNIST dataset show significant improvement over existing MMC algorithms.
ER -
APA
Li, Y., Tsang, I.W., Kwok, J. & Zhou, Z. (2009). Tighter and Convex Maximum Margin Clustering. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:344-351. Available from https://proceedings.mlr.press/v5/li09c.html.

Related Material

Download PDF: http://proceedings.mlr.press/v5/li09c/li09c.pdf