Optimizing Black-box Metrics with Iterative Example Weighting

Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Sanmi Koyejo
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4239-4249, 2021.

Abstract

We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix. Such black-box learning settings are ubiquitous, for example, when the learner only has query access to the metric of interest, or in noisy-label and domain adaptation applications where the learner must evaluate the metric via performance evaluation using a small validation sample. Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample. We show how to model and estimate the example weights and use them to iteratively post-shift a pre-trained class probability estimator to construct a classifier. We also analyze the resulting procedure’s statistical properties. Experiments on various label noise, domain shift, and fair classification setups confirm that our proposal compares favorably to the state-of-the-art baselines for each application.
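The abstract compresses the method into two steps: estimate example weights so that a weighted objective tracks the black-box metric measured on the validation sample, then post-shift a pre-trained class probability estimator. Below is a minimal illustrative sketch of one round of that idea for binary classification; it is our illustration under stated assumptions, not the authors' released code. Here eta_val holds the outputs of a pre-trained class probability estimator on the validation sample, and black_box_metric is a hypothetical query-only handle to the metric (both names are placeholders).

import numpy as np

def confusion_rates(y, y_pred):
    # Empirical true-positive and true-negative rates of a classifier.
    tp = np.mean((y == 1) & (y_pred == 1))
    tn = np.mean((y == 0) & (y_pred == 0))
    return tp, tn

def estimate_weights(eta_val, y_val, black_box_metric, n_probes=50, seed=0):
    # Query the black-box metric on randomly perturbed plug-in classifiers
    # and fit metric ~ a + w_tp*TP + w_tn*TN by least squares; the fitted
    # coefficients act as example weights for the two classes.
    rng = np.random.default_rng(seed)
    rows, scores = [], []
    for _ in range(n_probes):
        t = rng.uniform(0.2, 0.8)              # random threshold probe
        y_pred = (eta_val >= t).astype(int)    # perturbed plug-in classifier
        tp, tn = confusion_rates(y_val, y_pred)
        rows.append([1.0, tp, tn])
        scores.append(black_box_metric(y_val, y_pred))  # query-only access
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(scores), rcond=None)
    return coef[1], coef[2]

def post_shift(eta, w_tp, w_tn):
    # Plug-in rule for a metric that is (locally) linear in the confusion
    # matrix: predict 1 iff the weighted gain from a true positive exceeds
    # the weighted gain from a true negative.
    return (w_tp * eta >= w_tn * (1.0 - eta)).astype(int)

In the paper this is run iteratively, with the weights refined via local linear approximations of the metric rather than fit in a single round; the sketch shows only one such round.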

Cite this Paper
BibTeX
@InProceedings{pmlr-v139-hiranandani21a,
  title     = {Optimizing Black-box Metrics with Iterative Example Weighting},
  author    = {Hiranandani, Gaurush and Mathur, Jatin and Narasimhan, Harikrishna and Fard, Mahdi Milani and Koyejo, Sanmi},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4239--4249},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/hiranandani21a/hiranandani21a.pdf},
  url       = {https://proceedings.mlr.press/v139/hiranandani21a.html},
  abstract  = {We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix. Such black-box learning settings are ubiquitous, for example, when the learner only has query access to the metric of interest, or in noisy-label and domain adaptation applications where the learner must evaluate the metric via performance evaluation using a small validation sample. Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample. We show how to model and estimate the example weights and use them to iteratively post-shift a pre-trained class probability estimator to construct a classifier. We also analyze the resulting procedure’s statistical properties. Experiments on various label noise, domain shift, and fair classification setups confirm that our proposal compares favorably to the state-of-the-art baselines for each application.}
}
Endnote
%0 Conference Paper
%T Optimizing Black-box Metrics with Iterative Example Weighting
%A Gaurush Hiranandani
%A Jatin Mathur
%A Harikrishna Narasimhan
%A Mahdi Milani Fard
%A Sanmi Koyejo
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-hiranandani21a
%I PMLR
%P 4239--4249
%U https://proceedings.mlr.press/v139/hiranandani21a.html
%V 139
%X We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix. Such black-box learning settings are ubiquitous, for example, when the learner only has query access to the metric of interest, or in noisy-label and domain adaptation applications where the learner must evaluate the metric via performance evaluation using a small validation sample. Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample. We show how to model and estimate the example weights and use them to iteratively post-shift a pre-trained class probability estimator to construct a classifier. We also analyze the resulting procedure’s statistical properties. Experiments on various label noise, domain shift, and fair classification setups confirm that our proposal compares favorably to the state-of-the-art baselines for each application.
APA
Hiranandani, G., Mathur, J., Narasimhan, H., Fard, M.M. & Koyejo, S. (2021). Optimizing Black-box Metrics with Iterative Example Weighting. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4239-4249. Available from https://proceedings.mlr.press/v139/hiranandani21a.html.