Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare

Cyrus Cousins
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6422-6442, 2023.

Abstract

Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u;w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u;w) - M(u';w)| \leq \lambda \|u - u'\|^\alpha$ for some $\lambda$, $\alpha$, and $\|\cdot\|$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \to 0$ or $\min_i w_i \to 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case they are exponentially harder.
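To make the $w$-weighted $p$-power-mean concrete, here is a minimal Python sketch (not from the paper; the helper name power_mean is hypothetical) evaluating $M_p(u;w)$, including the limiting cases $p \to 0$ (weighted geometric mean, i.e., Nash welfare) and $p \to \pm\infty$ (max/min):

import numpy as np

def power_mean(u, w, p):
    # w-weighted p-power mean M_p(u; w) = (sum_i w_i u_i^p)^(1/p).
    # Illustrative sketch only, not code from the paper; assumes
    # u_i >= 0 (strictly positive when p <= 0) and sum_i w_i = 1.
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    if p == 0:                       # limit p -> 0: weighted geometric mean
        return float(np.exp(np.sum(w * np.log(u))))
    if np.isinf(p):                  # limits p -> +/- infinity: max / min
        return float(np.max(u)) if p > 0 else float(np.min(u))
    return float(np.sum(w * u ** p) ** (1.0 / p))

u = [0.9, 0.5, 0.7]                  # per-group utilities
w = [1/3, 1/3, 1/3]                  # uniform group weights
print(power_mean(u, w, 1))           # p = 1: utilitarian (arithmetic mean)
print(power_mean(u, w, 0))           # p = 0: Nash welfare (geometric mean)
print(power_mean(u, w, -np.inf))     # p -> -inf: egalitarian (minimum)

Malfare objectives with $p \geq 1$ apply the same mean to per-group disutilities; per the abstract, these are Lipschitz continuous and thus easy to estimate, whereas welfare objectives with $p$ near 0 are only locally Hölder continuous, which is what drives the sample-complexity gap.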

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-cousins23a,
  title     = {Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare},
  author    = {Cousins, Cyrus},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {6422--6442},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/cousins23a/cousins23a.pdf},
  url       = {https://proceedings.mlr.press/v206/cousins23a.html},
  abstract  = {Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u;w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u;w) - M(u';w)| \leq \lambda \|u - u'\|^\alpha$ for some $\lambda$, $\alpha$, and $\|\cdot\|$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \to 0$ or $\min_i w_i \to 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case they are exponentially harder.}
}
Endnote
%0 Conference Paper
%T Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare
%A Cyrus Cousins
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-cousins23a
%I PMLR
%P 6422--6442
%U https://proceedings.mlr.press/v206/cousins23a.html
%V 206
%X Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u;w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u;w) - M(u';w)| \leq \lambda \|u - u'\|^\alpha$ for some $\lambda$, $\alpha$, and $\|\cdot\|$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \to 0$ or $\min_i w_i \to 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case they are exponentially harder.
APA
Cousins, C. (2023). Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:6422-6442. Available from https://proceedings.mlr.press/v206/cousins23a.html.