Federated Rényi Fair Inference in Federated Heterogeneous System
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2804-2843, 2025.
Abstract
Federated Learning (FL) is a prominent distributed learning approach that must contend with two major challenges: statistical heterogeneity (i.e., non-identically distributed data) and system heterogeneity (i.e., variability in communication and computation capabilities across clients). As FL is commonly applied in sectors such as commerce and finance, group disparities may emerge and cause harm. However, current fairness algorithms assume homogeneous data, which does not align with the FL setting. The main challenge is estimating global fairness measures (e.g., Rényi or Pearson correlation) in an asynchronous, heterogeneous system. To address this, we propose the FedRényi algorithm, which enforces fairness through a Rényi-correlation regularizer. For statistical heterogeneity, FedRényi aggregates local fairness statistics to estimate the global Rényi correlation with an estimation error bound of $O(1/\sqrt{n})$, where $n$ is the total number of data samples. This result improves significantly on the prior bound of $O(1/\sqrt{K})$, where $K$ is the number of clients. We further prove that FedRényi converges at the same rate as in the homogeneous setting. For system heterogeneity, FedRényi approximates missing client updates through weighted averaging over a nearest-neighbor region, ensuring a non-expansive approximation error under non-convex conditions. Extensive experiments demonstrate that FedRényi achieves a promising fairness-accuracy trade-off, improving on baselines by at least 2\%.
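For intuition only, the sketch below illustrates the aggregation idea described in the abstract: each client reports local sufficient statistics, and the server combines them to estimate a global correlation between predictions and a sensitive attribute, as if all $n$ samples were pooled. This is a minimal illustration, not the paper's FedRényi implementation; it uses a Pearson-style correlation (one of the fairness measures the abstract names) with a binary sensitive attribute, and the function names (`local_fairness_stats`, `global_correlation`) are hypothetical.

```python
import numpy as np

def local_fairness_stats(y_hat, s):
    """Per-client sufficient statistics for a Pearson-style correlation
    between predictions y_hat and a sensitive attribute s.
    (Hypothetical helper; not the paper's exact local statistics.)"""
    return np.array([
        len(y_hat),            # n_k: local sample count
        y_hat.sum(),           # sum of predictions
        s.sum(),               # sum of sensitive attribute
        (y_hat * s).sum(),     # cross term
        (y_hat ** 2).sum(),    # second moment of predictions
        (s ** 2).sum(),        # second moment of attribute
    ])

def global_correlation(client_stats):
    """Server-side aggregation: summing local statistics gives the same
    correlation as pooling all n samples, which is the intuition behind
    an estimation error that shrinks with n rather than with K."""
    n, sum_y, sum_s, sum_ys, sum_yy, sum_ss = np.sum(client_stats, axis=0)
    cov = sum_ys / n - (sum_y / n) * (sum_s / n)
    var_y = sum_yy / n - (sum_y / n) ** 2
    var_s = sum_ss / n - (sum_s / n) ** 2
    return cov / np.sqrt(var_y * var_s + 1e-12)

# Toy usage: two synthetic clients with non-identical data.
rng = np.random.default_rng(0)
stats = []
for shift in (0.1, 0.5):
    s = rng.integers(0, 2, size=100).astype(float)
    y_hat = shift * s + rng.normal(size=100)
    stats.append(local_fairness_stats(y_hat, s))
print(global_correlation(stats))
```

In a fairness-regularized objective of the kind the abstract describes, an estimate like this would be added to the training loss as a penalty term; the paper's actual algorithm additionally handles asynchronous clients by approximating missing updates via weighted nearest-neighbor averaging, which this sketch does not cover.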