Learning Independent Causal Mechanisms

Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4036-4044, 2018.

Abstract

Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependences between observables. Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling.

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-parascandolo18a,
  title     = {Learning Independent Causal Mechanisms},
  author    = {Parascandolo, Giambattista and Kilbertus, Niki and Rojas-Carulla, Mateo and Sch{\"o}lkopf, Bernhard},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4036--4044},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/parascandolo18a/parascandolo18a.pdf},
  url       = {https://proceedings.mlr.press/v80/parascandolo18a.html},
  abstract  = {Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependences between observables. Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling.}
}
Endnote
%0 Conference Paper
%T Learning Independent Causal Mechanisms
%A Giambattista Parascandolo
%A Niki Kilbertus
%A Mateo Rojas-Carulla
%A Bernhard Schölkopf
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-parascandolo18a
%I PMLR
%P 4036--4044
%U https://proceedings.mlr.press/v80/parascandolo18a.html
%V 80
%X Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependences between observables. Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling.
APA
Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., & Schölkopf, B. (2018). Learning Independent Causal Mechanisms. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4036-4044. Available from https://proceedings.mlr.press/v80/parascandolo18a.html.