Probably Approximately Metric-Fair Learning
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5680-5688, 2018.
Abstract
The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness: given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metric-fairness does generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.
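The relaxed notion above can be illustrated empirically: sample random pairs of individuals and estimate how often two individuals who are close under the metric nonetheless receive predictions that differ by more than the allowed slack. The sketch below is a minimal illustration, not the paper's algorithm; the Euclidean metric, the bounded linear predictor, and the single slack parameter `alpha` are all simplifying assumptions chosen for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: individuals are feature vectors, the task-specific
# similarity metric is Euclidean distance, and the predictor is a
# linear score clipped to [0, 1]. All of these are illustrative choices.
def metric(x, y):
    return float(np.linalg.norm(x - y))

def predictor(x, w):
    return float(np.clip(x @ w, 0.0, 1.0))

def empirical_unfairness(X, w, alpha=0.5, n_pairs=10_000):
    """Estimate, over random pairs from the sample, the probability of the
    'similar but treated differently' event: metric distance at most alpha
    while the predictions differ by more than alpha."""
    n = len(X)
    violations = 0
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        similar = metric(X[i], X[j]) <= alpha
        treated_differently = abs(predictor(X[i], w) - predictor(X[j], w)) > alpha
        if similar and treated_differently:
            violations += 1
    return violations / n_pairs

X = rng.normal(size=(500, 5))
w = rng.normal(size=5)
print(empirical_unfairness(X, w))
```

Approximate metric-fairness asks that this estimated probability be small; because it is an expectation over random pairs, it is the kind of quantity that standard generalization arguments can transfer from the training sample to the population.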