Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability
Proceedings of Machine Learning Research, PMLR 89:644–653, 2019.
Abstract
We propose shifted inner-product similarity (SIPS), a novel yet very simple extension of the ordinary inner-product similarity (IPS) for neural-network-based graph embedding (GE). In contrast to IPS, which is limited to approximating positive-definite (PD) similarities, SIPS goes beyond this limitation by introducing bias terms into IPS; we theoretically prove that SIPS is capable of approximating not only PD but also conditionally PD (CPD) similarities, with many examples such as cosine similarity, negative Poincaré distance, and negative Wasserstein distance. Since SIPS with sufficiently large neural networks can learn a wide variety of similarities, SIPS alleviates the need for configuring the similarity function of GE. The approximation error rate is also evaluated, and experiments on two real-world datasets demonstrate that graph embedding using SIPS indeed outperforms existing methods.
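To make the "bias terms" idea concrete, here is a minimal NumPy sketch contrasting IPS with a shifted variant. It assumes a symmetric per-node scalar bias added to the inner product; the names `Y`, `u`, `ips`, and `sips` are illustrative stand-ins, and the random arrays play the role of learned embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3  # number of nodes, embedding dimension

# Stand-ins for learned quantities: one embedding vector and one
# scalar bias per node (in practice these come from a neural network).
Y = rng.normal(size=(n, d))  # node embeddings y_i
u = rng.normal(size=n)       # per-node bias terms u_i

def ips(i, j):
    """Ordinary inner-product similarity: <y_i, y_j>."""
    return Y[i] @ Y[j]

def sips(i, j):
    """Shifted inner-product similarity: <y_i, y_j> + u_i + u_j
    (assumed symmetric per-node bias form)."""
    return Y[i] @ Y[j] + u[i] + u[j]

# The shift is exactly the sum of the two nodes' biases.
print(sips(0, 1) - ips(0, 1), u[0] + u[1])
```

The extra bias terms are what let the similarity escape positive-definiteness: a constant offset is the classic example of a function that is CPD but not PD.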