Fairwashing: the risk of rationalization

Ulrich Aivodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:161-170, 2019.

Abstract

Black-box explanation is the problem of explaining how a machine learning model – whose internal logic is hidden from the auditor and generally complex – produces its outcomes. Current approaches to this problem include model explanation, outcome explanation, and model inspection. While these techniques can be beneficial by providing interpretability, they can also be misused to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize the decisions of an unfair black-box model, under both the model explanation and the outcome explanation approaches, with respect to a given fairness metric. Our solution, LaundryML, is based on a regularized rule-list enumeration algorithm that searches for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model that are nevertheless considerably less unfair.
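
To make the rationalization objective concrete, the following Python sketch shows the kind of score such an enumeration could minimize: disagreement with the black-box model plus a weighted unfairness term, with demographic parity standing in for the fairness metric. This is a minimal illustration under stated assumptions, not the authors' LaundryML code; the function names and the weight lam are hypothetical.

import numpy as np

def fidelity(surrogate_preds, blackbox_preds):
    # Fraction of inputs on which the surrogate agrees with the black box.
    return np.mean(surrogate_preds == blackbox_preds)

def demographic_parity_gap(preds, sensitive):
    # |P(yhat = 1 | s = 1) - P(yhat = 1 | s = 0)|: one common unfairness measure.
    return abs(preds[sensitive == 1].mean() - preds[sensitive == 0].mean())

def rationalization_score(surrogate_preds, blackbox_preds, sensitive, lam=1.0):
    # Lower is better: disagreement with the black box plus weighted unfairness.
    # The weight lam trades fidelity against apparent fairness; its value here
    # is an assumption, not taken from the paper.
    return (1.0 - fidelity(surrogate_preds, blackbox_preds)
            + lam * demographic_parity_gap(surrogate_preds, sensitive))

Enumerating candidate rule lists and retaining those with the lowest score would surface surrogates that closely mimic the black box while exhibiting a small demographic-parity gap, which is exactly the fairwashing risk the abstract describes.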

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-aivodji19a,
  title     = {Fairwashing: the risk of rationalization},
  author    = {Aivodji, Ulrich and Arai, Hiromi and Fortineau, Olivier and Gambs, S{\'e}bastien and Hara, Satoshi and Tapp, Alain},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {161--170},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/aivodji19a/aivodji19a.pdf},
  url       = {https://proceedings.mlr.press/v97/aivodji19a.html},
  abstract  = {Black-box explanation is the problem of explaining how a machine learning model – whose internal logic is hidden to the auditor and generally complex – produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.}
}
Endnote
%0 Conference Paper
%T Fairwashing: the risk of rationalization
%A Ulrich Aivodji
%A Hiromi Arai
%A Olivier Fortineau
%A Sébastien Gambs
%A Satoshi Hara
%A Alain Tapp
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-aivodji19a
%I PMLR
%P 161--170
%U https://proceedings.mlr.press/v97/aivodji19a.html
%V 97
%X Black-box explanation is the problem of explaining how a machine learning model – whose internal logic is hidden to the auditor and generally complex – produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.
APA
Aivodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S. & Tapp, A. (2019). Fairwashing: the risk of rationalization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:161-170. Available from https://proceedings.mlr.press/v97/aivodji19a.html.
