Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations

Ashwin Kalyan, Stefan Lee, Anitha Kannan, Dhruv Batra
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2449-2458, 2018.

Abstract

Many structured prediction problems (particularly in vision and language domains) are ambiguous: multiple outputs can be ‘correct’ for a single input. For example, there are many ways of describing an image or of translating a sentence. However, exhaustively annotating the applicability of all possible outputs is intractable because output spaces are exponentially large (e.g., all English sentences). In practice, these problems are cast as multi-class prediction, maximizing the likelihood of only a sparse set of annotations, which unfortunately penalizes the model for placing beliefs on plausible but unannotated outputs. We make and test the following hypothesis: for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our method in a controlled toy setup before reporting results on multi-label classification and two image-grounded sequence modeling tasks: captioning and question generation. We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.
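
The paper defines the exact transfer objective; as a rough illustration of the idea described above, the sketch below augments the standard maximum-likelihood term with a weighted likelihood term over annotations borrowed from an input's nearest neighbors. The function name `neighbor_transfer_loss`, the weight `alpha`, and the toy numbers are hypothetical, not taken from the paper.

```python
import numpy as np

def neighbor_transfer_loss(log_probs, own_idx, neighbor_idx, alpha=0.5):
    """Illustrative sketch (not the paper's exact objective).

    log_probs:    (V,) model log-probabilities over V candidate outputs.
    own_idx:      indices of the sparse annotations for this input.
    neighbor_idx: indices of annotations transferred from nearest neighbors.
    alpha:        weight on the transferred (neighbor) supervision.
    """
    own_nll = -np.mean(log_probs[own_idx])            # standard MLE term
    neighbor_nll = -np.mean(log_probs[neighbor_idx])  # transferred term
    return own_nll + alpha * neighbor_nll

# Toy usage: 5 candidate outputs; this input is annotated with output 0,
# and its neighbors contribute outputs 1 and 2 as extra supervision.
logits = np.array([2.0, 1.5, 1.2, -1.0, -2.0])
log_probs = logits - np.log(np.sum(np.exp(logits)))   # log-softmax
print(neighbor_transfer_loss(log_probs, [0], [1, 2], alpha=0.3))
```

Relative to plain maximum likelihood, the extra term stops the model from being penalized for spreading probability mass over plausible outputs that happen to be annotated only on neighboring examples.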

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-kalyan18a,
  title = {Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations},
  author = {Kalyan, Ashwin and Lee, Stefan and Kannan, Anitha and Batra, Dhruv},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {2449--2458},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/kalyan18a/kalyan18a.pdf},
  url = {https://proceedings.mlr.press/v80/kalyan18a.html},
  abstract = {Many structured prediction problems (particularly in vision and language domains) are ambiguous: multiple outputs can be ‘correct’ for a single input. For example, there are many ways of describing an image or of translating a sentence. However, exhaustively annotating the applicability of all possible outputs is intractable because output spaces are exponentially large (e.g., all English sentences). In practice, these problems are cast as multi-class prediction, maximizing the likelihood of only a sparse set of annotations, which unfortunately penalizes the model for placing beliefs on plausible but unannotated outputs. We make and test the following hypothesis: for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our method in a controlled toy setup before reporting results on multi-label classification and two image-grounded sequence modeling tasks: captioning and question generation. We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.}
}
Endnote
%0 Conference Paper
%T Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations
%A Ashwin Kalyan
%A Stefan Lee
%A Anitha Kannan
%A Dhruv Batra
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-kalyan18a
%I PMLR
%P 2449--2458
%U https://proceedings.mlr.press/v80/kalyan18a.html
%V 80
%X Many structured prediction problems (particularly in vision and language domains) are ambiguous: multiple outputs can be ‘correct’ for a single input. For example, there are many ways of describing an image or of translating a sentence. However, exhaustively annotating the applicability of all possible outputs is intractable because output spaces are exponentially large (e.g., all English sentences). In practice, these problems are cast as multi-class prediction, maximizing the likelihood of only a sparse set of annotations, which unfortunately penalizes the model for placing beliefs on plausible but unannotated outputs. We make and test the following hypothesis: for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our method in a controlled toy setup before reporting results on multi-label classification and two image-grounded sequence modeling tasks: captioning and question generation. We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.
APA
Kalyan, A., Lee, S., Kannan, A. & Batra, D. (2018). Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2449-2458. Available from https://proceedings.mlr.press/v80/kalyan18a.html.