Out-of-sample extension of graph adjacency spectral embedding
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2975-2984, 2018.
Abstract
Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample. In this work, we consider the problem of obtaining an out-of-sample extension for the adjacency spectral embedding, a procedure for embedding the vertices of a graph into Euclidean space. We present two different approaches to this problem, one based on a least-squares objective and the other based on a maximum-likelihood formulation. We show that if the graph of interest is drawn according to a certain latent position model called a random dot product graph, then both of these out-of-sample extensions estimate the true latent position of the out-of-sample vertex with the same error rate. Further, we prove a central limit theorem for the least-squares-based extension, showing that the estimate is asymptotically normal about the truth in the large-graph limit.
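The following is a minimal sketch, not the paper's own code, of the setting the abstract describes: a random dot product graph is sampled from hypothetical latent positions, the adjacency spectral embedding is computed from the top scaled eigenvectors of the adjacency matrix, and a least-squares out-of-sample estimate is obtained by regressing the new vertex's adjacency vector onto the in-sample embedding. All parameter values (n, d, the latent-position distribution) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent positions X in R^d with entries chosen so that
# all dot products X @ X.T lie in [0, 1] (valid edge probabilities).
n, d = 200, 2
X = rng.uniform(0.2, 0.8, size=(n, d)) / np.sqrt(d)

# Sample a symmetric, hollow adjacency matrix A with P[i,j] = <X_i, X_j>.
P = X @ X.T
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T

# Adjacency spectral embedding: top-d eigenvectors of A, scaled by the
# square roots of the corresponding eigenvalue magnitudes.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
Xhat = vecs[:, top] * np.sqrt(np.abs(vals[top]))

# An out-of-sample vertex with latent position x_new: we observe only its
# adjacency vector a (edges to the n in-sample vertices).
x_new = rng.uniform(0.2, 0.8, size=d) / np.sqrt(d)
a = (rng.random(n) < X @ x_new).astype(float)

# Least-squares out-of-sample extension: the estimated latent position is
# the minimizer of ||a - Xhat w||^2 over w in R^d.
w_hat, *_ = np.linalg.lstsq(Xhat, a, rcond=None)
print(w_hat)
```

Note that the spectral embedding recovers the latent positions only up to an orthogonal transformation, so `w_hat` should be compared to `x_new` after a Procrustes alignment; quantities such as the estimated edge probabilities `Xhat @ w_hat` are invariant to that rotation.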