NetGAN without GAN: From Random Walks to Low-Rank Approximations

Luca Rendsburg, Holger Heidrich, Ulrike von Luxburg
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8073-8082, 2020.

Abstract

A graph generative model takes a graph as input and is supposed to generate new graphs that “look like” the input graph. While most classical models focus on a few, hand-selected graph statistics and are too simplistic to reproduce real-world graphs, NetGAN recently emerged as an attractive alternative: by training a GAN to learn the random walk distribution of the input graph, the algorithm is able to reproduce a large number of important network patterns simultaneously, without explicitly specifying any of them. In this paper, we investigate the implicit bias of NetGAN. We find that the root of its generalization properties does not lie in the GAN architecture, but in an inconspicuous low-rank approximation of the logits of the random walk transition matrix. Step by step we can strip NetGAN of all unnecessary parts, including the GAN, and obtain a highly simplified reformulation that achieves comparable generalization results, but is orders of magnitude faster and easier to adapt. Being much simpler on the conceptual side, we reveal the implicit inductive bias of the algorithm — an important step towards increasing the interpretability, transparency and acceptance of machine learning systems.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-rendsburg20a,
  title     = {{N}et{GAN} without {GAN}: From Random Walks to Low-Rank Approximations},
  author    = {Rendsburg, Luca and Heidrich, Holger and Luxburg, Ulrike Von},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8073--8082},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/rendsburg20a/rendsburg20a.pdf},
  url       = {https://proceedings.mlr.press/v119/rendsburg20a.html},
  abstract  = {A graph generative model takes a graph as input and is supposed to generate new graphs that “look like” the input graph. While most classical models focus on few, hand-selected graph statistics and are too simplistic to reproduce real-world graphs, NetGAN recently emerged as an attractive alternative: by training a GAN to learn the random walk distribution of the input graph, the algorithm is able to reproduce a large number of important network patterns simultaneously, without explicitly specifying any of them. In this paper, we investigate the implicit bias of NetGAN. We find that the root of its generalization properties does not lie in the GAN architecture, but in an inconspicuous low-rank approximation of the logits random walk transition matrix. Step by step we can strip NetGAN of all unnecessary parts, including the GAN, and obtain a highly simplified reformulation that achieves comparable generalization results, but is orders of magnitudes faster and easier to adapt. Being much simpler on the conceptual side, we reveal the implicit inductive bias of the algorithm — an important step towards increasing the interpretability, transparency and acceptance of machine learning systems.}
}
Endnote
%0 Conference Paper
%T NetGAN without GAN: From Random Walks to Low-Rank Approximations
%A Luca Rendsburg
%A Holger Heidrich
%A Ulrike Von Luxburg
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-rendsburg20a
%I PMLR
%P 8073--8082
%U https://proceedings.mlr.press/v119/rendsburg20a.html
%V 119
%X A graph generative model takes a graph as input and is supposed to generate new graphs that “look like” the input graph. While most classical models focus on few, hand-selected graph statistics and are too simplistic to reproduce real-world graphs, NetGAN recently emerged as an attractive alternative: by training a GAN to learn the random walk distribution of the input graph, the algorithm is able to reproduce a large number of important network patterns simultaneously, without explicitly specifying any of them. In this paper, we investigate the implicit bias of NetGAN. We find that the root of its generalization properties does not lie in the GAN architecture, but in an inconspicuous low-rank approximation of the logits random walk transition matrix. Step by step we can strip NetGAN of all unnecessary parts, including the GAN, and obtain a highly simplified reformulation that achieves comparable generalization results, but is orders of magnitudes faster and easier to adapt. Being much simpler on the conceptual side, we reveal the implicit inductive bias of the algorithm — an important step towards increasing the interpretability, transparency and acceptance of machine learning systems.
APA
Rendsburg, L., Heidrich, H., & von Luxburg, U. (2020). NetGAN without GAN: From Random Walks to Low-Rank Approximations. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8073-8082. Available from https://proceedings.mlr.press/v119/rendsburg20a.html.