Tuning Computer Vision Models With Task Rewards

André Susano Pinto, Alexander Kolesnikov, Yuge Shi, Lucas Beyer, Xiaohua Zhai
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:33229-33239, 2023.

Abstract

Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures that address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show that it is surprisingly effective at improving generic models pretrained to imitate example outputs across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization, and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.
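The tuning recipe follows the reward-optimization approach common in NLP: after maximum-likelihood pretraining, the model is updated with a policy-gradient (REINFORCE-style) estimator so that sampled outputs earning a higher task reward become more likely. The sketch below is a hypothetical illustration of one such update step, not the authors' implementation; the model.sample API, reward_fn, and the per-image mean baseline are assumptions made for the example.

import torch

def reward_tuning_step(model, optimizer, images, reward_fn, n_samples=4):
    # One REINFORCE-style update: raise the log-probability of sampled
    # outputs in proportion to how much their reward beats a baseline.
    optimizer.zero_grad()
    # Hypothetical API: draw n_samples structured outputs per image
    # (e.g. box sets, captions) and their log-probabilities under the model.
    samples, log_probs = model.sample(images, n_samples=n_samples)  # (B, n_samples)
    # Score every sample with the (possibly non-differentiable) task reward,
    # e.g. a detection-quality score or a captioning metric such as CIDEr.
    rewards = torch.tensor(
        [[reward_fn(image, s) for s in row] for image, row in zip(images, samples)],
        dtype=log_probs.dtype,
    )
    # Per-image mean reward as a variance-reducing baseline.
    baseline = rewards.mean(dim=1, keepdim=True)
    # Policy-gradient surrogate loss: -(reward - baseline) * log p(sample).
    loss = -((rewards - baseline) * log_probs).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the reward only needs to be evaluated, never differentiated, this recipe applies to objectives such as mAP or panoptic quality that offer no useful gradient.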

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-susano-pinto23a,
  title     = {Tuning Computer Vision Models With Task Rewards},
  author    = {Susano Pinto, Andr\'{e} and Kolesnikov, Alexander and Shi, Yuge and Beyer, Lucas and Zhai, Xiaohua},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {33229--33239},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/susano-pinto23a/susano-pinto23a.pdf},
  url       = {https://proceedings.mlr.press/v202/susano-pinto23a.html}
}
APA
Susano Pinto, A., Kolesnikov, A., Shi, Y., Beyer, L., & Zhai, X. (2023). Tuning Computer Vision Models With Task Rewards. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:33229-33239. Available from https://proceedings.mlr.press/v202/susano-pinto23a.html.
