Explaining Groups of Points in Low-Dimensional Representations

Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7762-7771, 2020.

Abstract

A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent. We treat this workflow as an interpretable machine learning problem by leveraging the model that learned the low-dimensional representation to help identify the key differences between the groups. To solve this problem, we introduce a new type of explanation, a Global Counterfactual Explanation (GCE), and our algorithm, Transitive Global Translations (TGT), for computing GCEs. TGT identifies the differences between each pair of groups using compressed sensing but constrains those pairwise differences to be consistent among all of the groups. Empirically, we demonstrate that TGT is able to identify explanations that accurately explain the model while being relatively sparse, and that these explanations match real patterns in the data.
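The abstract's core idea, pairwise translations between groups that are constrained to be mutually consistent, can be illustrated with a small sketch. The code below is a hypothetical illustration, not the authors' TGT implementation: it assumes a trained representation map r, defines each group's translation relative to a reference group so that the pairwise translations compose transitively, and measures how many translated points land near the target group's embedding. The compressed-sensing (sparsity) part of TGT is omitted.

import numpy as np

def pairwise_translations(group_offsets):
    # group_offsets[k] is a hypothetical input-space translation from a
    # reference group to group k.  Defining delta(i, j) as a difference of
    # offsets makes the pairwise translations transitively consistent:
    # delta(i, k) == delta(i, j) + delta(j, k).
    K = len(group_offsets)
    return {(i, j): group_offsets[j] - group_offsets[i]
            for i in range(K) for j in range(K) if i != j}

def coverage(r, X_i, mu_j, delta, eps):
    # Fraction of group-i points that, after translation by delta and
    # re-embedding with the representation model r, fall within eps of
    # group j's mean embedding mu_j.
    Z = r(X_i + delta)
    return np.mean(np.linalg.norm(Z - mu_j, axis=1) < eps)

# Toy usage: a linear stand-in for a learned embedding and three groups.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 2))

def r(X):
    return X @ W

groups = [rng.normal(loc=m, scale=0.3, size=(50, 10)) for m in (0.0, 1.0, 2.0)]
offsets = [g.mean(axis=0) - groups[0].mean(axis=0) for g in groups]
deltas = pairwise_translations(offsets)
means = [r(g).mean(axis=0) for g in groups]
print(coverage(r, groups[0], means[2], deltas[(0, 2)], eps=2.0))

In TGT itself, the translations are learned through a compressed-sensing formulation that encourages sparsity rather than read off group means; the sketch only illustrates how defining delta(i, j) as a difference of per-group offsets keeps the pairwise differences consistent among all of the groups.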

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-plumb20a,
  title     = {Explaining Groups of Points in Low-Dimensional Representations},
  author    = {Plumb, Gregory and Terhorst, Jonathan and Sankararaman, Sriram and Talwalkar, Ameet},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7762--7771},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/plumb20a/plumb20a.pdf},
  url       = {https://proceedings.mlr.press/v119/plumb20a.html},
  abstract  = {A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent. We treat this workflow as an interpretable machine learning problem by leveraging the model that learned the low-dimensional representation to help identify the key differences between the groups. To solve this problem, we introduce a new type of explanation, a Global Counterfactual Explanation (GCE), and our algorithm, Transitive Global Translations (TGT), for computing GCEs. TGT identifies the differences between each pair of groups using compressed sensing but constrains those pairwise differences to be consistent among all of the groups. Empirically, we demonstrate that TGT is able to identify explanations that accurately explain the model while being relatively sparse, and that these explanations match real patterns in the data.}
}
Endnote
%0 Conference Paper
%T Explaining Groups of Points in Low-Dimensional Representations
%A Gregory Plumb
%A Jonathan Terhorst
%A Sriram Sankararaman
%A Ameet Talwalkar
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-plumb20a
%I PMLR
%P 7762--7771
%U https://proceedings.mlr.press/v119/plumb20a.html
%V 119
%X A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent. We treat this workflow as an interpretable machine learning problem by leveraging the model that learned the low-dimensional representation to help identify the key differences between the groups. To solve this problem, we introduce a new type of explanation, a Global Counterfactual Explanation (GCE), and our algorithm, Transitive Global Translations (TGT), for computing GCEs. TGT identifies the differences between each pair of groups using compressed sensing but constrains those pairwise differences to be consistent among all of the groups. Empirically, we demonstrate that TGT is able to identify explanations that accurately explain the model while being relatively sparse, and that these explanations match real patterns in the data.
APA
Plumb, G., Terhorst, J., Sankararaman, S., & Talwalkar, A. (2020). Explaining Groups of Points in Low-Dimensional Representations. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7762-7771. Available from https://proceedings.mlr.press/v119/plumb20a.html.