Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization

Hedda Cohen Indelman, Tamir Hazan
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4585-4595, 2021.

Abstract

Direct loss minimization is a popular approach for learning predictors over structured label spaces. This approach is computationally appealing, as it replaces integration with optimization and allows gradients to be propagated through a deep net using loss-perturbed prediction. Recently, this technique was extended to generative models by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance the learned score function against the randomized noise. We demonstrate empirically the effectiveness of learning this balance in structured discrete spaces.
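The idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a linear multiclass scorer, the Hamming loss, and a single-sample estimate of the classic direct-loss-minimization gradient (the difference of score gradients at the loss-perturbed and unperturbed argmax, divided by epsilon), with Gumbel noise whose scale `sigma` plays the role of the variance the paper proposes to learn. All names (`direct_loss_grad`, `hamming_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_loss(y, y_true):
    # task loss: 1 if the predicted label differs from the true label, else 0
    return float(y != y_true)

def direct_loss_grad(theta, x, y_true, eps=1.0, sigma=1.0):
    """One-sample direct-loss-minimization gradient estimate (illustrative sketch)."""
    # scores for each of K labels under a linear model s(x, y) = theta[y] . x
    scores = theta @ x
    # Gumbel perturbation of the scores; sigma scales the noise
    # (learning sigma alongside theta is the balance the abstract describes)
    gamma = rng.gumbel(size=scores.shape)
    perturbed = scores + sigma * gamma
    y_pred = int(np.argmax(perturbed))
    # loss-perturbed prediction: argmax of score plus eps-scaled task loss
    task_loss = np.array([hamming_loss(k, y_true) for k in range(len(scores))])
    y_direct = int(np.argmax(perturbed + eps * task_loss))
    # finite-difference gradient: (grad s(x, y_direct) - grad s(x, y_pred)) / eps;
    # for a linear scorer, grad_theta s(x, y) puts x in row y of theta
    grad = np.zeros_like(theta)
    grad[y_direct] += x / eps
    grad[y_pred] -= x / eps
    return grad, y_pred

# usage: one descent step on a 3-class problem with 2 features
theta = np.zeros((3, 2))
x = np.array([1.0, 2.0])
grad, y_pred = direct_loss_grad(theta, x, y_true=0)
theta -= 0.1 * grad  # descend the estimated loss gradient
```

When `y_direct` equals `y_pred` the estimate is zero, so gradients flow only when the eps-scaled loss changes the winning structure; replacing the label set with a combinatorial space (and the argmax with a combinatorial solver) recovers the structured setting the abstract refers to.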

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-indelman21a,
  title     = {Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization},
  author    = {Indelman, Hedda Cohen and Hazan, Tamir},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4585--4595},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/indelman21a/indelman21a.pdf},
  url       = {https://proceedings.mlr.press/v139/indelman21a.html},
  abstract  = {Direct loss minimization is a popular approach for learning predictors over structured label spaces. This approach is computationally appealing as it replaces integration with optimization and allows to propagate gradients in a deep net using loss-perturbed prediction. Recently, this technique was extended to generative models, by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance between the learned score function and the randomized noise. We demonstrate empirically the effectiveness of learning this balance in structured discrete spaces.}
}
Endnote
%0 Conference Paper
%T Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization
%A Hedda Cohen Indelman
%A Tamir Hazan
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-indelman21a
%I PMLR
%P 4585--4595
%U https://proceedings.mlr.press/v139/indelman21a.html
%V 139
%X Direct loss minimization is a popular approach for learning predictors over structured label spaces. This approach is computationally appealing as it replaces integration with optimization and allows to propagate gradients in a deep net using loss-perturbed prediction. Recently, this technique was extended to generative models, by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance between the learned score function and the randomized noise. We demonstrate empirically the effectiveness of learning this balance in structured discrete spaces.
APA
Indelman, H.C. & Hazan, T. (2021). Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4585-4595. Available from https://proceedings.mlr.press/v139/indelman21a.html.