AP-Perf: Incorporating Generic Performance Metrics in Differentiable Learning

Rizal Fathony, Zico Kolter
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4130-4140, 2020.

Abstract

We propose a method that enables practitioners to conveniently incorporate custom non-decomposable performance metrics into differentiable learning pipelines, notably those based upon neural network architectures. Our approach is based on the recently developed adversarial prediction framework, a distributionally robust approach that optimizes a metric in the worst case given the statistical summary of the empirical distribution. We formulate a marginal distribution technique to reduce the complexity of optimizing the adversarial prediction formulation over a vast range of non-decomposable metrics. We demonstrate how easy it is to write and incorporate complex custom metrics using our provided tool. Finally, we show the effectiveness of our approach on various classification tasks on tabular datasets from the UCI repository and benchmark datasets, as well as image classification tasks. The code for our proposed method is available at https://github.com/rizalzaf/AdversarialPrediction.jl.
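
For illustration, here is a minimal sketch of how a custom metric might be written with the linked AdversarialPrediction.jl package and plugged into a differentiable training loop. The identifiers used (@metric, ConfusionMatrix, special_case_positive!, ap_objective) are assumed from the repository's examples and are illustrative, not a definitive rendering of the API.

    using AdversarialPrediction
    using Flux

    # Define an F-beta style metric from confusion-matrix statistics:
    # C.tp = true positives, C.ap = actual positives, C.pp = predicted positives.
    @metric FBeta beta
    function define(::Type{FBeta}, C::ConfusionMatrix, beta)
        return ((1 + beta^2) * C.tp) / (beta^2 * C.ap + C.pp)
    end

    f2_score = FBeta(2)                # instantiate F2 score
    special_case_positive!(f2_score)   # handle the all-negative corner case

    # Use the metric as a training objective for a (hypothetical) Flux model:
    model = Chain(Dense(nvar, 100, relu), Dense(100, 1), vec)
    objective(x, y) = ap_objective(model(x), y, f2_score)
    Flux.train!(objective, params(model), train_set, ADAM(1e-3))

The intent, as the abstract describes, is that a practitioner only writes the metric as a function of confusion-matrix entries; the adversarial prediction formulation supplies the differentiable surrogate.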

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-fathony20a,
  title     = {AP-Perf: Incorporating Generic Performance Metrics in Differentiable Learning},
  author    = {Fathony, Rizal and Kolter, Zico},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4130--4140},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/fathony20a/fathony20a.pdf},
  url       = {https://proceedings.mlr.press/v108/fathony20a.html},
  abstract  = {We propose a method that enables practitioners to conveniently incorporate custom non-decomposable performance metrics into differentiable learning pipelines, notably those based upon neural network architectures. Our approach is based on the recently developed adversarial prediction framework, a distributionally robust approach that optimizes a metric in the worst case given the statistical summary of the empirical distribution. We formulate a marginal distribution technique to reduce the complexity of optimizing the adversarial prediction formulation over a vast range of non-decomposable metrics. We demonstrate how easy it is to write and incorporate complex custom metrics using our provided tool. Finally, we show the effectiveness of our approach on various classification tasks on tabular datasets from the UCI repository and benchmark datasets, as well as image classification tasks. The code for our proposed method is available at https://github.com/rizalzaf/AdversarialPrediction.jl.}
}
Endnote
%0 Conference Paper
%T AP-Perf: Incorporating Generic Performance Metrics in Differentiable Learning
%A Rizal Fathony
%A Zico Kolter
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-fathony20a
%I PMLR
%P 4130--4140
%U https://proceedings.mlr.press/v108/fathony20a.html
%V 108
%X We propose a method that enables practitioners to conveniently incorporate custom non-decomposable performance metrics into differentiable learning pipelines, notably those based upon neural network architectures. Our approach is based on the recently developed adversarial prediction framework, a distributionally robust approach that optimizes a metric in the worst case given the statistical summary of the empirical distribution. We formulate a marginal distribution technique to reduce the complexity of optimizing the adversarial prediction formulation over a vast range of non-decomposable metrics. We demonstrate how easy it is to write and incorporate complex custom metrics using our provided tool. Finally, we show the effectiveness of our approach on various classification tasks on tabular datasets from the UCI repository and benchmark datasets, as well as image classification tasks. The code for our proposed method is available at https://github.com/rizalzaf/AdversarialPrediction.jl.
APA
Fathony, R. & Kolter, Z. (2020). AP-Perf: Incorporating Generic Performance Metrics in Differentiable Learning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:4130-4140. Available from https://proceedings.mlr.press/v108/fathony20a.html.