Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability

Akifumi Okuno, Geewook Kim, Hidetoshi Shimodaira
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:644-653, 2019.

Abstract

We propose shifted inner-product similarity (SIPS), a novel yet very simple extension of the ordinary inner-product similarity (IPS) for neural-network based graph embedding (GE). In contrast to IPS, which is limited to approximating positive-definite (PD) similarities, SIPS goes beyond this limitation by introducing bias terms into IPS; we theoretically prove that SIPS is capable of approximating not only PD but also conditionally PD (CPD) similarities, which include many examples such as cosine similarity, negative Poincaré distance, and negative Wasserstein distance. Since SIPS with sufficiently large neural networks learns a variety of similarities, SIPS alleviates the need for configuring the similarity function of GE. The approximation error rate is also evaluated, and experiments on two real-world datasets demonstrate that graph embedding using SIPS indeed outperforms existing methods.
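The core construction is simple to state: where IPS scores a pair by the inner product of their learned embeddings, SIPS adds a learned scalar bias for each node. The sketch below illustrates this with toy random-weight feature maps standing in for the neural networks; the names `f`, `u`, and the specific weight shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned networks in the paper:
# W parameterizes a vector-valued embedding, w a scalar bias function.
W = rng.standard_normal((3, 5))  # 5-dim input -> 3-dim embedding
w = rng.standard_normal(5)       # 5-dim input -> scalar bias

def f(x):
    """Vector-valued embedding, shared across both arguments."""
    return np.tanh(W @ x)

def u(x):
    """Scalar bias term -- the 'shift' that distinguishes SIPS from IPS."""
    return float(w @ x)

def ips(x, y):
    """Ordinary inner-product similarity (limited to PD similarities)."""
    return float(f(x) @ f(y))

def sips(x, y):
    """Shifted inner-product similarity: IPS plus per-node bias terms,
    which extends the class of approximable similarities to CPD ones."""
    return ips(x, y) + u(x) + u(y)

x, y = rng.standard_normal(5), rng.standard_normal(5)
print(sips(x, y))  # symmetric score: sips(x, y) == sips(y, x)
```

Note that the bias terms preserve symmetry, since `u(x) + u(y)` does not depend on argument order, so SIPS remains a valid similarity function.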

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-okuno19a,
  title     = {Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability},
  author    = {Okuno, Akifumi and Kim, Geewook and Shimodaira, Hidetoshi},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {644--653},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/okuno19a/okuno19a.pdf},
  url       = {https://proceedings.mlr.press/v89/okuno19a.html},
  abstract  = {We propose shifted inner-product similarity (SIPS), which is a novel yet very simple extension of the ordinary inner-product similarity (IPS) for neural-network based graph embedding (GE). In contrast to IPS, that is limited to approximating positive-definite (PD) similarities, SIPS goes beyond the limitation by introducing bias terms in IPS; we theoretically prove that SIPS is capable of approximating not only PD but also conditionally PD (CPD) similarities with many examples such as cosine similarity, negative Poincare distance and negative Wasserstein distance. Since SIPS with sufficiently large neural networks learns a variety of similarities, SIPS alleviates the need for configuring the similarity function of GE. Approximation error rate is also evaluated, and experiments on two real-world datasets demonstrate that graph embedding using SIPS indeed outperforms existing methods.}
}
Endnote
%0 Conference Paper
%T Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability
%A Akifumi Okuno
%A Geewook Kim
%A Hidetoshi Shimodaira
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-okuno19a
%I PMLR
%P 644--653
%U https://proceedings.mlr.press/v89/okuno19a.html
%V 89
%X We propose shifted inner-product similarity (SIPS), which is a novel yet very simple extension of the ordinary inner-product similarity (IPS) for neural-network based graph embedding (GE). In contrast to IPS, that is limited to approximating positive-definite (PD) similarities, SIPS goes beyond the limitation by introducing bias terms in IPS; we theoretically prove that SIPS is capable of approximating not only PD but also conditionally PD (CPD) similarities with many examples such as cosine similarity, negative Poincare distance and negative Wasserstein distance. Since SIPS with sufficiently large neural networks learns a variety of similarities, SIPS alleviates the need for configuring the similarity function of GE. Approximation error rate is also evaluated, and experiments on two real-world datasets demonstrate that graph embedding using SIPS indeed outperforms existing methods.
APA
Okuno, A., Kim, G. & Shimodaira, H. (2019). Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:644-653. Available from https://proceedings.mlr.press/v89/okuno19a.html.