Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan Kulkarni
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8227-8237, 2021.

Abstract

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-nori21a,
  title     = {Accuracy, Interpretability, and Differential Privacy via Explainable Boosting},
  author    = {Nori, Harsha and Caruana, Rich and Bu, Zhiqi and Shen, Judy Hanwen and Kulkarni, Janardhan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8227--8237},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/nori21a/nori21a.pdf},
  url       = {https://proceedings.mlr.press/v139/nori21a.html},
  abstract  = {We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.}
}
Endnote
%0 Conference Paper %T Accuracy, Interpretability, and Differential Privacy via Explainable Boosting %A Harsha Nori %A Rich Caruana %A Zhiqi Bu %A Judy Hanwen Shen %A Janardhan Kulkarni %B Proceedings of the 38th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2021 %E Marina Meila %E Tong Zhang %F pmlr-v139-nori21a %I PMLR %P 8227--8237 %U https://proceedings.mlr.press/v139/nori21a.html %V 139 %X We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.
APA
Nori, H., Caruana, R., Bu, Z., Shen, J. H., & Kulkarni, J. (2021). Accuracy, Interpretability, and Differential Privacy via Explainable Boosting. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8227-8237. Available from https://proceedings.mlr.press/v139/nori21a.html.