Efficiently Disentangle Causal Representations

Yuanpeng Li, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church
Conference on Parsimony and Learning, PMLR 234:54-71, 2024.

Abstract

This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities in the original and new distributions. We approximate the difference with models’ generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner’s adaptation speed to the new distribution, the proposed approach only requires evaluating the model’s generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9–11.0$\times$ more sample efficient and 9.4–32.4$\times$ quicker than the previous method on various tasks.
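The abstract's core signal can be illustrated with a minimal, hypothetical sketch (not the paper's implementation, which operates on learned representations): two discrete variables with ground truth A → B, count-based conditional models fit in each candidate direction, and an intervention that changes only the marginal P(A). The causal conditional P(B|A) is invariant across the two distributions, so a model of it trained on the original distribution still generalizes to the new one with no adaptation steps; the anticausal conditional P(A|B) shifts, so its generalization degrades. The variable sizes, the Laplace-smoothed count estimator, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10  # categories per variable in this toy setting

# Ground truth is A -> B: a marginal P(A) and an invariant mechanism P(B|A).
p_A = rng.dirichlet(np.ones(N))
p_B_given_A = rng.dirichlet(np.ones(N), size=N)

def sample(p_a, n):
    """Draw n pairs (a, b) from p_a(a) * P(b|a)."""
    a = rng.choice(N, size=n, p=p_a)
    b = np.array([rng.choice(N, p=p_B_given_A[ai]) for ai in a])
    return a, b

def fit_conditional(x, y, smooth=1.0):
    """Count-based estimate of P(y|x) with Laplace smoothing."""
    counts = np.full((N, N), smooth)
    np.add.at(counts, (x, y), 1.0)
    return counts / counts.sum(axis=1, keepdims=True)

def mean_ll(cond, x, y):
    """Average log-likelihood of y given x under a conditional table."""
    return np.log(cond[x, y]).mean()

# Train both candidate conditionals on the original distribution.
a_tr, b_tr = sample(p_A, 20_000)
cond_a2b = fit_conditional(a_tr, b_tr)  # candidate mechanism P(B|A)
cond_b2a = fit_conditional(b_tr, a_tr)  # candidate mechanism P(A|B)

# New distribution: an intervention that changes only the marginal P(A).
a_new, b_new = sample(rng.dirichlet(np.ones(N)), 5_000)

# Generalization loss of each trained conditional on the new data, measured
# against a conditional re-fit on the new sample so that the entropy terms
# cancel; the gap then approximates the shift in conditional probabilities
# between the original and new distributions.
gap_a2b = (mean_ll(fit_conditional(a_new, b_new), a_new, b_new)
           - mean_ll(cond_a2b, a_new, b_new))
gap_b2a = (mean_ll(fit_conditional(b_new, a_new), b_new, a_new)
           - mean_ll(cond_b2a, b_new, a_new))

print(f"A->B conditional shift: {gap_a2b:.4f}  (small: P(B|A) is invariant)")
print(f"B->A conditional shift: {gap_b2a:.4f}  (large: P(A|B) changes)")
# No gradient steps are taken on the new distribution: the decision uses
# only how well each trained conditional generalizes to it.
```

Because the decision requires only a forward evaluation of each trained model on new-distribution samples, rather than tracking how quickly each model adapts under gradient updates, the comparison fits the standard train-then-evaluate loop, which is the efficiency argument the abstract makes.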

Cite this Paper


BibTeX
@InProceedings{pmlr-v234-li24a,
  title     = {Efficiently Disentangle Causal Representations},
  author    = {Li, Yuanpeng and Hestness, Joel and Elhoseiny, Mohamed and Zhao, Liang and Church, Kenneth},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {54--71},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/li24a/li24a.pdf},
  url       = {https://proceedings.mlr.press/v234/li24a.html}
}
Endnote
%0 Conference Paper
%T Efficiently Disentangle Causal Representations
%A Yuanpeng Li
%A Joel Hestness
%A Mohamed Elhoseiny
%A Liang Zhao
%A Kenneth Church
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-li24a
%I PMLR
%P 54--71
%U https://proceedings.mlr.press/v234/li24a.html
%V 234
APA
Li, Y., Hestness, J., Elhoseiny, M., Zhao, L. & Church, K. (2024). Efficiently Disentangle Causal Representations. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:54-71. Available from https://proceedings.mlr.press/v234/li24a.html.
