Semi-supervised Learning by Higher Order Regularization

Xueyuan Zhou, Mikhail Belkin
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:892-900, 2011.

Abstract

In semi-supervised learning, in the limit of infinitely many unlabeled points with a fixed set of labeled points, the solutions of several graph Laplacian regularization based algorithms were shown by Nadler et al. (2009) to degenerate to constant functions with “spikes” at the labeled points in $\mathbb{R}^d$ for $d \ge 2$. These optimization problems all use the graph Laplacian regularizer as a common penalty term. In this paper, we address this problem by using regularization based on an iterated Laplacian, which is equivalent to a higher order Sobolev semi-norm. Alternatively, it can be viewed as a generalization of the thin plate spline to an unknown submanifold in high dimensions. We also discuss relationships between Reproducing Kernel Hilbert Spaces and Green’s functions. Experimental results support our analysis, showing consistently improved performance when iterated Laplacians are used.
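To make the idea concrete, below is a minimal sketch (not the authors’ exact algorithm or experimental setup) of regularized least squares on a graph in which the standard Laplacian penalty $f^\top L f$ is replaced by the iterated penalty $f^\top L^m f$. The Gaussian graph construction, the bandwidth sigma, the regularization weight lam, the order m, and all function names are illustrative assumptions.

import numpy as np

def gaussian_laplacian(X, sigma=0.5):
    """Unnormalized graph Laplacian L = D - W with Gaussian edge weights."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def iterated_laplacian_ssl(X, y_labeled, labeled_idx, m=2, lam=1e-3, sigma=0.5):
    """Solve  min_f  sum_{i labeled} (f_i - y_i)^2 + lam * f^T L^m f.
    m = 1 is the standard graph Laplacian regularizer; m >= 2 is the
    iterated (higher order) penalty.
    """
    n = X.shape[0]
    L = gaussian_laplacian(X, sigma)
    Lm = np.linalg.matrix_power(L, m)       # iterated Laplacian L^m
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0       # quadratic loss only on labeled points
    b = np.zeros(n)
    b[labeled_idx] = y_labeled
    # Normal equations: (J + lam * L^m) f = b, solved for f on all points.
    return np.linalg.solve(J + lam * Lm, b)

# Toy usage: two Gaussian blobs, one labeled point per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
f = iterated_laplacian_ssl(X, y_labeled=np.array([-1.0, 1.0]),
                           labeled_idx=np.array([0, 50]), m=2)
pred = np.sign(f)

With m = 1 this reduces to the standard Laplacian-regularized objective whose solutions degenerate in the limit described above; choosing m >= 2 corresponds to the higher order Sobolev semi-norm penalty advocated in the paper.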

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-zhou11b,
  title     = {Semi-supervised Learning by Higher Order Regularization},
  author    = {Zhou, Xueyuan and Belkin, Mikhail},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {892--900},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dud{\'i}k, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/zhou11b/zhou11b.pdf},
  url       = {https://proceedings.mlr.press/v15/zhou11b.html}
}
APA
Zhou, X., & Belkin, M. (2011). Semi-supervised Learning by Higher Order Regularization. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:892-900. Available from https://proceedings.mlr.press/v15/zhou11b.html.