Searching for Fairer Machine Learning Ensembles
Proceedings of the Second International Conference on Automated Machine Learning, PMLR 224:17/1-19, 2023.
Abstract
Bias mitigators can improve algorithmic fairness in machine learning models, but their effect on fairness is often not stable across data splits. A popular approach to training more stable models is ensemble learning, but unfortunately, it is unclear how to combine ensembles with mitigators to best navigate trade-offs between fairness and predictive performance. To that end, we extended the open-source library Lale to enable the modular composition of 8 mitigators, 4 ensembles, and their corresponding hyperparameters, and we empirically explored the space of configurations on 13 datasets. We distilled the insights from this exploration into a guidance diagram that can serve as a starting point for practitioners, and we demonstrate that it is robust and reproducible. We also ran automated combined algorithm selection and hyperparameter tuning (CASH) over ensembles with mitigators. On many datasets, the solutions from the guidance diagram perform similarly to those from CASH.
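The modular composition described above can be illustrated with a minimal scikit-learn sketch; this is not the paper's Lale-based implementation, and the `IdentityMitigator` stand-in is a hypothetical placeholder for a real pre-estimator bias mitigator.

```python
# Hedged sketch: the paper composes mitigators with ensembles in Lale;
# here the same modular pattern is mimicked with plain scikit-learn,
# using a hypothetical no-op "mitigator" transformer as a stand-in.
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import Pipeline


class IdentityMitigator(BaseEstimator, TransformerMixin):
    """Placeholder for a pre-estimator bias mitigator (e.g. a reweigher)."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # A real mitigator would repair or reweight the features here.
        return X


# Compose mitigation and ensembling as interchangeable pipeline stages,
# so both (and their hyperparameters) can be searched over jointly.
pipeline = Pipeline([
    ("mitigator", IdentityMitigator()),
    ("ensemble", BaggingClassifier(n_estimators=10, random_state=0)),
])
```

Because the pipeline exposes the hyperparameters of both stages, a tuner (e.g. grid search or a CASH optimizer) can select mitigator and ensemble settings together, which mirrors the search space explored in the paper.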