Tighter and Convex Maximum Margin Clustering

Yu-Feng Li, Ivor W. Tsang, James Kwok, Zhi-Hua Zhou
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:344-351, 2009.

Abstract

The maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended to clustering, referred to as Maximum Margin Clustering (MMC), which has achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via a convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMC methods still do not scale to medium-sized data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin between opposite clusters via label generation. We show that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-the-art convex MMC relaxations. Experiments on eighteen UCI datasets and the MNIST dataset show significant improvement over existing MMC algorithms.
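The core maximum margin clustering idea, finding a cluster labeling whose induced SVM achieves the largest margin, can be illustrated with a simple alternating heuristic. This toy sketch is not LG-MMC itself, and being non-convex it exhibits exactly the local-minima problem the abstract mentions; the PCA initialization, subgradient SVM solver, and median-split balance constraint are all illustrative choices, not details from the paper:

```python
import numpy as np

def toy_mmc(X, n_outer=15, n_inner=50, lr=0.1, reg=0.01):
    """Alternating heuristic for maximum margin clustering:
    fit a linear SVM to the current labels, then relabel each
    point by its decision score, splitting at the median so the
    two clusters stay balanced (this avoids the trivial
    all-one-cluster solution). Non-convex: the result depends
    on the initialization."""
    n, d = X.shape
    # heuristic start: split along the top principal component
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]
    y = np.where(scores > np.median(scores), 1.0, -1.0)
    w, b = np.zeros(d), 0.0
    for _ in range(n_outer):
        # subgradient descent on the regularized hinge loss, labels fixed
        for _ in range(n_inner):
            viol = y * (X @ w + b) < 1          # margin violators
            gw = reg * w - (y[viol, None] * X[viol]).sum(axis=0) / n
            gb = -y[viol].sum() / n
            w -= lr * gw
            b -= lr * gb
        # relabel by decision score; the median split keeps clusters balanced
        scores = X @ w + b
        y = np.where(scores > np.median(scores), 1.0, -1.0)
    return y
```

On two well-separated Gaussian blobs this recovers the cluster structure, but with an unlucky initialization it can converge to a meaningless split, which is precisely the motivation for the convex SDP relaxations discussed in the paper.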

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-li09c,
  title     = {Tighter and Convex Maximum Margin Clustering},
  author    = {Yu-Feng Li and Ivor W. Tsang and James Kwok and Zhi-Hua Zhou},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {344--351},
  year      = {2009},
  editor    = {David van Dyk and Max Welling},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/li09c/li09c.pdf},
  url       = {http://proceedings.mlr.press/v5/li09c.html},
  abstract  = {Maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended for clustering, referred to as Maximum Margin Clustering (MMC) and achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable for medium data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin of opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-art convex MMCs. Experiments on eighteen UCI datasets and MNIST dataset show significant improvement over existing MMC algorithms.}
}
Endnote
%0 Conference Paper
%T Tighter and Convex Maximum Margin Clustering
%A Yu-Feng Li
%A Ivor W. Tsang
%A James Kwok
%A Zhi-Hua Zhou
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-li09c
%I PMLR
%J Proceedings of Machine Learning Research
%P 344--351
%U http://proceedings.mlr.press
%V 5
%W PMLR
%X Maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended for clustering, referred to as Maximum Margin Clustering (MMC) and achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable for medium data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin of opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-art convex MMCs. Experiments on eighteen UCI datasets and MNIST dataset show significant improvement over existing MMC algorithms.
RIS
TY - CPAPER
TI - Tighter and Convex Maximum Margin Clustering
AU - Yu-Feng Li
AU - Ivor W. Tsang
AU - James Kwok
AU - Zhi-Hua Zhou
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
PY - 2009/04/15
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-li09c
PB - PMLR
SP - 344
DP - PMLR
EP - 351
L1 - http://proceedings.mlr.press/v5/li09c/li09c.pdf
UR - http://proceedings.mlr.press/v5/li09c.html
AB - Maximum margin principle has been successfully applied to many supervised and semi-supervised problems in machine learning. Recently, this principle was extended for clustering, referred to as Maximum Margin Clustering (MMC) and achieved promising performance in recent studies. To avoid the problem of local minima, MMC can be solved globally via convex semi-definite programming (SDP) relaxation. Although many efficient approaches have been proposed to alleviate the computational burden of SDP, convex MMCs are still not scalable for medium data sets. In this paper, we propose a novel convex optimization method, LG-MMC, which maximizes the margin of opposite clusters via label generation. It can be shown that LG-MMC is much more scalable than existing convex approaches. Moreover, we show that our convex relaxation is tighter than state-of-art convex MMCs. Experiments on eighteen UCI datasets and MNIST dataset show significant improvement over existing MMC algorithms.
ER -
APA
Li, Y., Tsang, I.W., Kwok, J. & Zhou, Z. (2009). Tighter and Convex Maximum Margin Clustering. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in PMLR 5:344-351.