Connecting Interpretability and Robustness in Decision Trees through Separation

Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7839-7849, 2021.

Abstract

Recent research has recognized interpretability and robustness as essential properties of trustworthy classification. Curiously, a connection between robustness and interpretability was empirically observed, but the theoretical reasoning behind it remained elusive. In this paper, we rigorously investigate this connection. Specifically, we focus on interpretation using decision trees and robustness to l_{\infty}-perturbation. Previous works defined the notion of r-separation as a sufficient condition for robustness. We prove upper and lower bounds on the tree size in the case where the data is r-separated. We then show that a tighter bound on the size is possible when the data is linearly separated. We provide the first algorithm with provable guarantees on robustness, interpretability, and accuracy in the context of decision trees. Experiments confirm that our algorithm yields classifiers that are interpretable, robust, and highly accurate.
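As a rough illustration of the separation notion the abstract refers to, the sketch below measures how far apart differently-labeled points are in l_{\infty} distance. It assumes, as in prior work on r-separation, that a dataset is r-separated when every pair of points with different labels is at least 2r apart; the function name linf_separation and the toy data are illustrative only and do not come from the paper.

import numpy as np

def linf_separation(X, y):
    """Smallest l_inf distance between any two differently-labeled points.

    Under the assumed definition, the data is r-separated for every
    r <= (returned value) / 2.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    best = np.inf
    for i in range(len(X)):
        # All points whose label differs from that of X[i]
        diff_class = X[y != y[i]]
        if len(diff_class):
            # l_inf distance from X[i] to its nearest differently-labeled point
            d = np.max(np.abs(diff_class - X[i]), axis=1).min()
            best = min(best, d)
    return best

# Toy example: two well-separated clusters with opposite labels
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
sep = linf_separation(X, y)
print(sep, "-> data is r-separated for any r <=", sep / 2)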

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-moshkovitz21a,
  title     = {Connecting Interpretability and Robustness in Decision Trees through Separation},
  author    = {Moshkovitz, Michal and Yang, Yao-Yuan and Chaudhuri, Kamalika},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7839--7849},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/moshkovitz21a/moshkovitz21a.pdf},
  url       = {https://proceedings.mlr.press/v139/moshkovitz21a.html}
}
APA
Moshkovitz, M., Yang, Y.-Y., & Chaudhuri, K. (2021). Connecting Interpretability and Robustness in Decision Trees through Separation. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7839-7849. Available from https://proceedings.mlr.press/v139/moshkovitz21a.html.
