Learning Interpretable, Tree-Based Projection Mappings for Nonlinear Embeddings

Arman S. Zharmagambetov, Miguel A. Carreira-Perpinan
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9550-9570, 2022.

Abstract

Model interpretability is a topic of renewed interest given today’s widespread practical use of machine learning, and the need to trust or understand automated predictions. We consider the problem of optimally learning interpretable out-of-sample mappings for nonlinear embedding methods such as $t$-SNE. We argue for the use of sparse oblique decision trees because they strike a good tradeoff between accuracy and interpretability, which can be controlled via a hyperparameter, thus allowing one to achieve a model with a desired explanatory complexity. The resulting optimization problem is difficult because decision trees are not differentiable. By using an equivalent formulation of the problem, we give an algorithm that can learn such a tree for any given nonlinear embedding objective. We illustrate experimentally how the resulting trees provide insights into the data beyond what a simple 2D visualization of the embedding does.
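As a rough illustration of the setting (not the authors' algorithm): the paper learns a sparse oblique tree jointly with the embedding objective via an equivalent reformulation, whereas the sketch below simply fits scikit-learn's axis-aligned DecisionTreeRegressor post hoc to a fixed t-SNE embedding, i.e., a plain "direct fit" stand-in. The dataset, library calls, and parameters are illustrative assumptions, not taken from the paper.

    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE
    from sklearn.tree import DecisionTreeRegressor

    X, _ = load_digits(return_X_y=True)

    # Step 1: compute a free 2D embedding Z with t-SNE.
    Z = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

    # Step 2: fit a tree F: X -> Z as the out-of-sample projection mapping.
    # NOTE: this is an axis-aligned tree fit after the fact; the paper instead
    # optimizes a *sparse oblique* tree against the embedding objective itself,
    # with a hyperparameter controlling sparsity (explanatory complexity).
    # max_depth below loosely plays that complexity-capping role.
    tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, Z)

    # New points can now be projected without rerunning t-SNE.
    Z_new = tree.predict(X[:5])
    print(Z_new)

A depth-bounded tree yields human-readable decision rules explaining where a point lands in the 2D map; the sparse oblique trees learned in the paper are claimed to achieve a better accuracy/interpretability tradeoff than this axis-aligned stand-in.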

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-zharmagambetov22a,
  title     = {Learning Interpretable, Tree-Based Projection Mappings for Nonlinear Embeddings},
  author    = {Zharmagambetov, Arman S. and Carreira-Perpinan, Miguel A.},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9550--9570},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/zharmagambetov22a/zharmagambetov22a.pdf},
  url       = {https://proceedings.mlr.press/v151/zharmagambetov22a.html}
}
EndNote
%0 Conference Paper
%T Learning Interpretable, Tree-Based Projection Mappings for Nonlinear Embeddings
%A Arman S. Zharmagambetov
%A Miguel A. Carreira-Perpinan
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-zharmagambetov22a
%I PMLR
%P 9550--9570
%U https://proceedings.mlr.press/v151/zharmagambetov22a.html
%V 151
APA
Zharmagambetov, A.S. & Carreira-Perpinan, M.A. (2022). Learning Interpretable, Tree-Based Projection Mappings for Nonlinear Embeddings. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9550-9570. Available from https://proceedings.mlr.press/v151/zharmagambetov22a.html.