Unleashing Linear Optimizers for Group-Fair Learning and Optimization

Daniel Alabi, Nicole Immorlica, Adam Kalai
Proceedings of the 31st Conference On Learning Theory, PMLR 75:2043-2066, 2018.

Abstract

Most systems and learning algorithms optimize average performance or average loss – one reason being computational complexity. However, many objectives of practical interest are more complex than simply average loss. This arises, for example, when balancing performance or loss with fairness across people. We prove that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance. Our main result is a polynomial-time reduction that uses a linear optimizer to optimize an arbitrary (Lipschitz continuous) function of performance over a (constant) number of possibly-overlapping groups. This includes fairness objectives over small numbers of groups, and we further point out that other existing notions of fairness, such as individual fairness, can be cast as convex optimization and hence addressed with more standard convex techniques. Beyond learning, our approach applies to multi-objective optimization more generally.
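The reduction itself is not reproduced on this page, but the high-level idea can be illustrated with a short, hypothetical sketch. The sketch below is simplified and assumes access to a weighted-loss oracle (here called linear_oracle) that minimizes a weighted average of per-group losses, which is exactly the kind of linear objective standard learners already handle. It performs a coarse grid search over group weightings and keeps the hypothesis whose per-group losses score best under a nonlinear, Lipschitz objective g. All names (linear_oracle, group_losses, g) are illustrative placeholders, not the paper's actual algorithm or API.

# Illustrative sketch only, not the paper's exact reduction.
# Idea: reuse a linear (weighted-average) optimizer as an oracle to
# optimize a nonlinear, Lipschitz objective g of per-group performances.
import itertools
import numpy as np

def group_losses(h, groups, X, y):
    """Average 0-1 loss of hypothesis h on each group (hypothetical helper).
    `groups` is a list of index arrays, possibly overlapping."""
    return np.array([np.mean(h(X[idx]) != y[idx]) for idx in groups])

def fair_optimize(linear_oracle, g, groups, X, y, grid_size=10):
    """Grid-search over nonnegative group weightings; for each weighting,
    the linear oracle minimizes the weighted average of group losses.
    Keep the hypothesis whose per-group losses score best under g."""
    best_h, best_val = None, np.inf
    for w in itertools.product(np.linspace(0.0, 1.0, grid_size),
                               repeat=len(groups)):
        if sum(w) == 0:
            continue
        w = np.array(w) / sum(w)  # normalize onto the simplex
        # Assumed oracle: returns a hypothesis minimizing the w-weighted
        # average of per-group losses (a purely linear objective).
        h = linear_oracle(w, groups, X, y)
        val = g(group_losses(h, groups, X, y))
        if val < best_val:
            best_h, best_val = h, val
    return best_h, best_val

Note that this sketch makes on the order of grid_size^(number of groups) oracle calls, which stays polynomial when the number of groups is a constant; this mirrors, at a very coarse level, why a constant number of groups matters in the abstract's claim.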

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-alabi18a,
  title     = {Unleashing Linear Optimizers for Group-Fair Learning and Optimization},
  author    = {Alabi, Daniel and Immorlica, Nicole and Kalai, Adam},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {2043--2066},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/alabi18a/alabi18a.pdf},
  url       = {https://proceedings.mlr.press/v75/alabi18a.html},
  abstract  = {Most systems and learning algorithms optimize average performance or average loss – one reason being computational complexity. However, many objectives of practical interest are more complex than simply average loss. This arises, for example, when balancing performance or loss with fairness across people. We prove that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance. Our main result is a polynomial-time reduction that uses a linear optimizer to optimize an arbitrary (Lipschitz continuous) function of performance over a (constant) number of possibly-overlapping groups. This includes fairness objectives over small numbers of groups, and we further point out that other existing notions of fairness, such as individual fairness, can be cast as convex optimization and hence addressed with more standard convex techniques. Beyond learning, our approach applies to multi-objective optimization more generally.}
}
Endnote
%0 Conference Paper
%T Unleashing Linear Optimizers for Group-Fair Learning and Optimization
%A Daniel Alabi
%A Nicole Immorlica
%A Adam Kalai
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-alabi18a
%I PMLR
%P 2043--2066
%U https://proceedings.mlr.press/v75/alabi18a.html
%V 75
%X Most systems and learning algorithms optimize average performance or average loss – one reason being computational complexity. However, many objectives of practical interest are more complex than simply average loss. This arises, for example, when balancing performance or loss with fairness across people. We prove that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance. Our main result is a polynomial-time reduction that uses a linear optimizer to optimize an arbitrary (Lipschitz continuous) function of performance over a (constant) number of possibly-overlapping groups. This includes fairness objectives over small numbers of groups, and we further point out that other existing notions of fairness, such as individual fairness, can be cast as convex optimization and hence addressed with more standard convex techniques. Beyond learning, our approach applies to multi-objective optimization more generally.
APA
Alabi, D., Immorlica, N. & Kalai, A. (2018). Unleashing Linear Optimizers for Group-Fair Learning and Optimization. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:2043-2066. Available from https://proceedings.mlr.press/v75/alabi18a.html.
