Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6422-6442, 2023.
Abstract
Cardinal objectives serve as intuitive targets in fair machine learning by summarizing utility (welfare) or disutility (malfare) $u$ over $g$ groups. Under standard axioms, all welfare and malfare functions are $w$-weighted $p$-power-means, i.e., $M_p(u; w) = \sqrt[p]{\sum_{i=1}^{g} w_i u_i^p}$, with $p \leq 1$ for welfare, or $p \geq 1$ for malfare. We show the same under weaker axioms, and also identify stronger axioms that naturally restrict $p$. It is known that power-mean malfare functions are Lipschitz continuous, and thus statistically easy to estimate or learn. We show that all power means are locally Hölder continuous, i.e., $|M(u; w) - M(u'; w)| \leq \lambda \lVert u - u' \rVert^{\alpha}$ for some $\lambda$, $\alpha$, and $\lVert \cdot \rVert$. In particular, $\lambda$ and $1/\alpha$ are bounded except as $p \to 0$ or $\min_i w_i \to 0$, and via this analysis we bound the sample complexity of optimizing welfare. This yields a novel concept of fair-PAC learning, wherein welfare functions are only polynomially harder to optimize than malfare functions, except when $p \approx 0$ or $\min_i w_i \approx 0$, in which case optimization becomes exponentially harder.
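To make the power-mean family concrete, the following is a minimal Python sketch (not taken from the paper; the function name power_mean and the NumPy-based handling of the limiting cases $p \to 0$ and $p \to \pm\infty$ are illustrative assumptions). Smaller $p$ places more weight on worse-off groups, which is why $p \leq 1$ suits welfare and $p \geq 1$ suits malfare.

```python
import numpy as np

def power_mean(u, w, p):
    """Weighted p-power mean M_p(u; w) = (sum_i w_i * u_i**p) ** (1/p).

    u : nonnegative utilities (or disutilities), one per group
    w : nonnegative weights summing to 1
    p : exponent; p <= 1 for welfare, p >= 1 for malfare
    """
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    if p == 0.0:
        # Limiting case p -> 0: the weighted geometric mean.
        return float(np.exp(np.sum(w * np.log(u))))
    if np.isinf(p):
        # Limiting cases p -> -inf / +inf: min / max over groups with w_i > 0.
        return float(u[w > 0].min() if p < 0 else u[w > 0].max())
    return float(np.sum(w * u ** p) ** (1.0 / p))

# Example: per-group utilities with uniform weights.
u = [0.9, 0.6, 0.75]
w = [1/3, 1/3, 1/3]
print(power_mean(u, w, 1))        # arithmetic mean (utilitarian welfare)
print(power_mean(u, w, 0))        # geometric mean (Nash welfare)
print(power_mean(u, w, -np.inf))  # minimum (egalitarian welfare)
```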