Robust Decision Trees Against Adversarial Examples

Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1122-1131, 2019.

Abstract

Although adversarial examples and model robustness have been extensively studied in the context of neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective will take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms by approximating the inner minimizer in the saddle point problem, and present efficient implementations for classical information gain based trees as well as state-of-the-art tree boosting systems such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can significantly improve the robustness of tree-based models against adversarial examples.
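
The worst-case split evaluation described in the abstract can be sketched as follows. This is a minimal illustration of the general max-min idea, not the authors' algorithm: the Gini-impurity surrogate, the function names (`robust_split_score`, `split_quality`), and the particular candidate assignments of the ambiguous points are assumptions made for this sketch; the paper approximates the inner minimizer with its own schemes for information gain based trees and for XGBoost.

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity of a 1-D array of class labels."""
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_quality(y_left, y_right):
    """Higher is better: negative weighted Gini impurity of the two children."""
    n = y_left.size + y_right.size
    if n == 0:
        return 0.0
    return -(y_left.size * gini_impurity(y_left) +
             y_right.size * gini_impurity(y_right)) / n

def robust_split_score(x, y, threshold, eps):
    """Approximate worst-case quality of splitting feature x at `threshold`
    under an l_inf perturbation of radius `eps`.

    Points whose feature value lies within eps of the threshold are ambiguous:
    an adversary can push them to either child. Minimizing over all 2^m
    assignments of m ambiguous points is exponential, so, in the spirit of the
    abstract, the inner minimizer is approximated by checking only a few
    representative assignments (an assumption of this sketch).
    """
    left_fixed = y[x <= threshold - eps]
    right_fixed = y[x > threshold + eps]
    ambiguous = y[(x > threshold - eps) & (x <= threshold + eps)]

    empty = np.array([], dtype=y.dtype)
    candidate_assignments = [
        (ambiguous, empty),                 # adversary sends all ambiguous points left
        (empty, ambiguous),                 # adversary sends all ambiguous points right
        (ambiguous[::2], ambiguous[1::2]),  # one mixed assignment
    ]
    worst = np.inf
    for amb_left, amb_right in candidate_assignments:
        q = split_quality(np.concatenate([left_fixed, amb_left]),
                          np.concatenate([right_fixed, amb_right]))
        worst = min(worst, q)
    return worst

# A robust tree builder would pick the (feature, threshold) pair that maximizes
# this worst-case score, giving the max-min structure described in the abstract.
```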

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-chen19m,
  title = {Robust Decision Trees Against Adversarial Examples},
  author = {Chen, Hongge and Zhang, Huan and Boning, Duane and Hsieh, Cho-Jui},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {1122--1131},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/chen19m/chen19m.pdf},
  url = {https://proceedings.mlr.press/v97/chen19m.html},
  abstract = {Although adversarial examples and model robustness have been extensively studied in the context of neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective will take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms by approximating the inner minimizer in the saddle point problem, and present efficient implementations for classical information gain based trees as well as state-of-the-art tree boosting systems such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can significantly improve the robustness of tree-based models against adversarial examples.}
}
Endnote
%0 Conference Paper
%T Robust Decision Trees Against Adversarial Examples
%A Hongge Chen
%A Huan Zhang
%A Duane Boning
%A Cho-Jui Hsieh
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-chen19m
%I PMLR
%P 1122--1131
%U https://proceedings.mlr.press/v97/chen19m.html
%V 97
%X Although adversarial examples and model robustness have been extensively studied in the context of neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited. In this paper, we show that tree-based models are also vulnerable to adversarial examples and develop a novel algorithm to learn robust trees. At its core, our method aims to optimize the performance under the worst-case perturbation of input features, which leads to a max-min saddle point problem. Incorporating this saddle point objective into the decision tree building procedure is non-trivial due to the discrete nature of trees: a naive approach to finding the best split according to this saddle point objective will take exponential time. To make our approach practical and scalable, we propose efficient tree building algorithms by approximating the inner minimizer in the saddle point problem, and present efficient implementations for classical information gain based trees as well as state-of-the-art tree boosting systems such as XGBoost. Experimental results on real-world datasets demonstrate that the proposed algorithms can significantly improve the robustness of tree-based models against adversarial examples.
APA
Chen, H., Zhang, H., Boning, D. & Hsieh, C. (2019). Robust Decision Trees Against Adversarial Examples. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1122-1131. Available from https://proceedings.mlr.press/v97/chen19m.html.