Discovering Interpretable Representations for Both Deep Generative and Discriminative Models

Tameem Adel, Zoubin Ghahramani, Adrian Weller
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:50-59, 2018.

Abstract

Interpretability of representations in both deep generative and discriminative models is highly desirable. Current methods jointly optimize an objective combining accuracy and interpretability. However, this may reduce accuracy, and is not applicable to already trained models. We propose two interpretability frameworks. First, we provide an interpretable lens for an existing model. We use a generative model which takes as input the representation in an existing (generative or discriminative) model, weakly supervised by limited side information. Applying a flexible and invertible transformation to the input leads to an interpretable representation with no loss in accuracy. We extend the approach using an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what "interpretable" means. Our second framework relies on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. We also propose a new interpretability evaluation metric based on our framework. Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms.

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-adel18a,
  title     = {Discovering Interpretable Representations for Both Deep Generative and Discriminative Models},
  author    = {Adel, Tameem and Ghahramani, Zoubin and Weller, Adrian},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {50--59},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/adel18a/adel18a.pdf},
  url       = {https://proceedings.mlr.press/v80/adel18a.html},
  abstract  = {Interpretability of representations in both deep generative and discriminative models is highly desirable. Current methods jointly optimize an objective combining accuracy and interpretability. However, this may reduce accuracy, and is not applicable to already trained models. We propose two interpretability frameworks. First, we provide an interpretable lens for an existing model. We use a generative model which takes as input the representation in an existing (generative or discriminative) model, weakly supervised by limited side information. Applying a flexible and invertible transformation to the input leads to an interpretable representation with no loss in accuracy. We extend the approach using an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what "interpretable" means. Our second framework relies on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. We also propose a new interpretability evaluation metric based on our framework. Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms.}
}
Endnote
%0 Conference Paper
%T Discovering Interpretable Representations for Both Deep Generative and Discriminative Models
%A Tameem Adel
%A Zoubin Ghahramani
%A Adrian Weller
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-adel18a
%I PMLR
%P 50--59
%U https://proceedings.mlr.press/v80/adel18a.html
%V 80
%X Interpretability of representations in both deep generative and discriminative models is highly desirable. Current methods jointly optimize an objective combining accuracy and interpretability. However, this may reduce accuracy, and is not applicable to already trained models. We propose two interpretability frameworks. First, we provide an interpretable lens for an existing model. We use a generative model which takes as input the representation in an existing (generative or discriminative) model, weakly supervised by limited side information. Applying a flexible and invertible transformation to the input leads to an interpretable representation with no loss in accuracy. We extend the approach using an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what "interpretable" means. Our second framework relies on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. We also propose a new interpretability evaluation metric based on our framework. Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms.
APA
Adel, T., Ghahramani, Z. & Weller, A. (2018). Discovering Interpretable Representations for Both Deep Generative and Discriminative Models. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:50-59. Available from https://proceedings.mlr.press/v80/adel18a.html.