Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6422-6442, 2023.
Abstract
Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u;w) = \sqrt[p]{\sum_{i=1}^g w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u; w) - M(u'; w)| \leq \lambda \lVert u - u' \rVert^{\alpha}$ for some $\lambda$, $\alpha$, and norm $\lVert \cdot \rVert$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \rightarrow 0$ or $\min_i w_i \rightarrow 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case optimization is exponentially harder.
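
For concreteness, below is a minimal NumPy sketch of the weighted $p$-power-mean $M_p(u;w)$ defined above (an illustration only, not code from the paper; it assumes nonnegative utilities and weights that sum to one, and handles the limiting cases by convention):

```python
import numpy as np

def power_mean(u, w, p):
    """Weighted p-power-mean M_p(u; w) = (sum_i w_i * u_i^p)^(1/p).

    Conventions for limiting cases: p = 0 gives the weighted geometric mean,
    p = -inf gives min_i u_i, and p = +inf gives max_i u_i.
    """
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    if p == 0:
        # limit as p -> 0: weighted geometric mean
        return np.exp(np.sum(w * np.log(u)))
    if np.isinf(p):
        return u.max() if p > 0 else u.min()
    return np.sum(w * u ** p) ** (1.0 / p)

# Two groups with equal weights: p = 1 is the (utilitarian) arithmetic mean,
# smaller p places more weight on the worse-off group.
u = [0.6, 0.9]
w = [0.5, 0.5]
print(power_mean(u, w, 1))    # 0.75   (arithmetic mean)
print(power_mean(u, w, 0))    # ~0.735 (geometric mean)
print(power_mean(u, w, -1))   # ~0.72  (harmonic mean)
```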