Optimizing Black-box Metrics with Adaptive Surrogates

Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4784-4793, 2020.

Abstract

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates. We pose the training problem as an optimization over a relaxed surrogate space, which we solve by estimating local gradients for the metric and performing inexact convex projections. We analyze gradient estimates based on finite differences and local linear interpolations, and show convergence of our approach under smoothness assumptions with respect to the surrogates. Experimental results on classification and ranking problems verify the proposal performs on par with methods that know the mathematical formulation, and adds notable value when the form of the metric is unknown.
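The abstract's core idea, treating the black-box metric as a function of a few easy-to-optimize surrogates and estimating its local gradient numerically, can be sketched briefly. The following is a minimal illustration, not the authors' code: `metric_fn`, `surrogates`, and the weighted-surrogate training step are assumptions made for the example, and only the central finite-difference estimate is shown (the paper also analyzes local linear interpolation and inexact convex projections).

```python
# Minimal sketch (assumed interface, not the paper's implementation) of
# estimating a black-box metric's gradient with respect to a small number
# of surrogate losses via central finite differences.
import numpy as np

def finite_difference_gradient(metric_fn, surrogates, eps=1e-2):
    """Estimate dM/ds_k at the current surrogate values by central differences."""
    s = np.asarray(surrogates, dtype=float)
    grad = np.zeros_like(s)
    for k in range(s.size):
        delta = np.zeros_like(s)
        delta[k] = eps
        grad[k] = (metric_fn(s + delta) - metric_fn(s - delta)) / (2.0 * eps)
    return grad

# Example: a metric we can only query, evaluated on two surrogate losses
# (e.g. hinge losses on two data slices); the true form is hidden in practice.
metric = lambda s: 0.7 * s[0] + 0.3 * s[1] ** 2
current_surrogates = np.array([0.20, 0.35])
weights = finite_difference_gradient(metric, current_surrogates)
print(weights)
# The gradient estimate supplies per-surrogate weights; a model would then be
# trained to decrease the weighted surrogate combination, loosely mirroring
# the projection step over the relaxed surrogate space described above.
```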

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-jiang20a,
  title     = {Optimizing Black-box Metrics with Adaptive Surrogates},
  author    = {Jiang, Qijia and Adigun, Olaoluwa and Narasimhan, Harikrishna and Fard, Mahdi Milani and Gupta, Maya},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4784--4793},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/jiang20a/jiang20a.pdf},
  url       = {https://proceedings.mlr.press/v119/jiang20a.html},
  abstract  = {We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates. We pose the training problem as an optimization over a relaxed surrogate space, which we solve by estimating local gradients for the metric and performing inexact convex projections. We analyze gradient estimates based on finite differences and local linear interpolations, and show convergence of our approach under smoothness assumptions with respect to the surrogates. Experimental results on classification and ranking problems verify the proposal performs on par with methods that know the mathematical formulation, and adds notable value when the form of the metric is unknown.}
}
Endnote
%0 Conference Paper
%T Optimizing Black-box Metrics with Adaptive Surrogates
%A Qijia Jiang
%A Olaoluwa Adigun
%A Harikrishna Narasimhan
%A Mahdi Milani Fard
%A Maya Gupta
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-jiang20a
%I PMLR
%P 4784--4793
%U https://proceedings.mlr.press/v119/jiang20a.html
%V 119
%X We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates. We pose the training problem as an optimization over a relaxed surrogate space, which we solve by estimating local gradients for the metric and performing inexact convex projections. We analyze gradient estimates based on finite differences and local linear interpolations, and show convergence of our approach under smoothness assumptions with respect to the surrogates. Experimental results on classification and ranking problems verify the proposal performs on par with methods that know the mathematical formulation, and adds notable value when the form of the metric is unknown.
APA
Jiang, Q., Adigun, O., Narasimhan, H., Fard, M.M. & Gupta, M. (2020). Optimizing Black-box Metrics with Adaptive Surrogates. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4784-4793. Available from https://proceedings.mlr.press/v119/jiang20a.html.
