Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement

Andrew Ross, Finale Doshi-Velez
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9084-9094, 2021.

Abstract

In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs. However, these algorithms and metrics often assume that both representations and ground-truth factors are flat, continuous, and factorized, whereas many real-world generative processes involve rich hierarchical structure, mixtures of discrete and continuous variables with dependence between them, and even varying intrinsic dimensionality. In this work, we develop benchmarks, algorithms, and metrics for learning such hierarchical representations.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ross21a,
  title     = {Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement},
  author    = {Ross, Andrew and Doshi-Velez, Finale},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9084--9094},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ross21a/ross21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ross21a.html},
  abstract  = {In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs. However, these algorithms and metrics often assume that both representations and ground-truth factors are flat, continuous, and factorized, whereas many real-world generative processes involve rich hierarchical structure, mixtures of discrete and continuous variables with dependence between them, and even varying intrinsic dimensionality. In this work, we develop benchmarks, algorithms, and metrics for learning such hierarchical representations.}
}
Endnote
%0 Conference Paper
%T Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
%A Andrew Ross
%A Finale Doshi-Velez
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ross21a
%I PMLR
%P 9084--9094
%U https://proceedings.mlr.press/v139/ross21a.html
%V 139
%X In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs. However, these algorithms and metrics often assume that both representations and ground-truth factors are flat, continuous, and factorized, whereas many real-world generative processes involve rich hierarchical structure, mixtures of discrete and continuous variables with dependence between them, and even varying intrinsic dimensionality. In this work, we develop benchmarks, algorithms, and metrics for learning such hierarchical representations.
APA
Ross, A. &amp; Doshi-Velez, F. (2021). Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9084-9094. Available from https://proceedings.mlr.press/v139/ross21a.html.