FR-Train: A Mutual Information-Based Approach to Fair and Robust Training

Yuji Roh, Kangwook Lee, Steven Whang, Changho Suh
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8147-8157, 2020.

Abstract

Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning. However, the existing model fairness techniques mistakenly view poisoned data as an additional bias to be fixed, resulting in severe performance degradation. To address this problem, we propose FR-Train, which holistically performs fair and robust model training. We provide a mutual information-based interpretation of an existing adversarial training-based fairness-only method, and apply this idea to architect an additional discriminator that can identify poisoned data using a clean validation set and reduce its influence. In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning by both mitigating the bias and defending against poisoning. We also demonstrate how to construct clean validation sets using crowdsourcing, and release new benchmark datasets.
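The adversarial fairness setup that the paper reinterprets through mutual information can be sketched in a few lines: a predictor is trained on the task while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is penalized for letting it succeed. The sketch below is illustrative only, assuming a logistic-regression predictor and adversary on synthetic data; it is not the FR-Train architecture itself, and it omits the paper's second (robustness) discriminator and the clean validation set. The fairness weight `lam` is a made-up knob.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic biased data: sensitive attribute z leaks into the features
# and correlates with the label y.
n = 2000
z = rng.integers(0, 2, n).astype(float)    # sensitive attribute
x = rng.normal(size=(n, 3)) + z[:, None]   # features leak z
y = (x[:, 0] + 0.5 * z + rng.normal(scale=0.5, size=n) > 0.75).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Predictor: logistic regression on x. Adversary: logistic regression on
# the predictor's output p, trying to recover z (a proxy for I(y_hat; z)).
w = np.zeros(3)          # predictor weights
a, b = 0.0, 0.0          # adversary slope / intercept
lr, lam = 0.1, 0.7       # learning rate; fairness strength (illustrative)

for _ in range(300):
    p = sigmoid(x @ w)                     # predictor output
    q = sigmoid(a * p + b)                 # adversary's guess of z
    # Adversary step: descend its cross-entropy loss for predicting z.
    a += lr * np.mean((z - q) * p)
    b += lr * np.mean(z - q)
    # Predictor step: descend task loss MINUS lam * adversary loss,
    # i.e. fit y while making z hard to recover from p.
    grad_task = x.T @ (p - y) / n
    grad_adv = x.T @ (a * (q - z) * p * (1 - p)) / n
    w -= lr * (grad_task - lam * grad_adv)

p = sigmoid(x @ w)  # final predictions in (0, 1)
```

FR-Train extends this two-player game with a third player: an extra discriminator that compares training examples against a small clean validation set and down-weights suspected poisoned points, so the fairness constraint is not fooled into "correcting" the poison.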

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-roh20a,
  title     = {{FR}-Train: A Mutual Information-Based Approach to Fair and Robust Training},
  author    = {Roh, Yuji and Lee, Kangwook and Whang, Steven and Suh, Changho},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8147--8157},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/roh20a/roh20a.pdf},
  url       = {http://proceedings.mlr.press/v119/roh20a.html},
  abstract  = {Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning. However, the existing model fairness techniques mistakenly view poisoned data as an additional bias to be fixed, resulting in severe performance degradation. To address this problem, we propose FR-Train, which holistically performs fair and robust model training. We provide a mutual information-based interpretation of an existing adversarial training-based fairness-only method, and apply this idea to architect an additional discriminator that can identify poisoned data using a clean validation set and reduce its influence. In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning by both mitigating the bias and defending against poisoning. We also demonstrate how to construct clean validation sets using crowdsourcing, and release new benchmark datasets.}
}
Endnote
%0 Conference Paper
%T FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
%A Yuji Roh
%A Kangwook Lee
%A Steven Whang
%A Changho Suh
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-roh20a
%I PMLR
%P 8147--8157
%U http://proceedings.mlr.press/v119/roh20a.html
%V 119
%X Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning. However, the existing model fairness techniques mistakenly view poisoned data as an additional bias to be fixed, resulting in severe performance degradation. To address this problem, we propose FR-Train, which holistically performs fair and robust model training. We provide a mutual information-based interpretation of an existing adversarial training-based fairness-only method, and apply this idea to architect an additional discriminator that can identify poisoned data using a clean validation set and reduce its influence. In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning by both mitigating the bias and defending against poisoning. We also demonstrate how to construct clean validation sets using crowdsourcing, and release new benchmark datasets.
APA
Roh, Y., Lee, K., Whang, S., & Suh, C. (2020). FR-Train: A Mutual Information-Based Approach to Fair and Robust Training. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8147-8157. Available from http://proceedings.mlr.press/v119/roh20a.html.