ILLUME: Rationalizing Vision-Language Models through Human Interactions

Manuel Brack, Patrick Schramowski, Björn Deiseroth, Kristian Kersting
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:3021-3037, 2023.

Abstract

Bootstrapping from pre-trained language models has proven to be an efficient approach for building vision-language models (VLMs) for tasks such as image captioning or visual question answering. However, the outputs of these models rarely align with users’ rationales for specific answers. To improve this alignment and reinforce commonsense reasoning, we propose a tuning paradigm based on human interactions with machine-generated data. Our ILLUME executes the following loop: given an image-question-answer prompt, the VLM samples multiple candidate rationales, and a human critic provides feedback via preference selection, which is then used for fine-tuning. This loop increases the training data and gradually carves out the VLM’s rationalization capabilities so that they align with human intent. Our exhaustive experiments demonstrate that ILLUME is competitive with standard supervised fine-tuning while using significantly less training data and requiring only minimal feedback.
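
The loop described in the abstract can be summarized in a short sketch. The Python snippet below is reconstructed from the abstract alone and is not the authors' implementation; the callables sample_rationales, human_select, and finetune, as well as the parameters num_iterations and num_candidates, are hypothetical placeholders passed in by the caller.

def illume_loop(vlm, dataset, sample_rationales, human_select, finetune,
                num_iterations=3, num_candidates=8):
    """Iteratively tune a VLM on self-generated rationales that a human
    critic has approved via preference selection (ILLUME-style loop,
    sketched from the abstract only)."""
    training_pool = []
    for _ in range(num_iterations):
        for image, question, answer in dataset:
            # The VLM samples multiple candidate rationales for the
            # image-question-answer prompt.
            candidates = sample_rationales(vlm, image, question, answer,
                                           num_candidates)
            # A human critic selects the rationales that match their intent.
            preferred = human_select(image, question, answer, candidates)
            # Approved rationales grow the training data over iterations.
            training_pool.extend((image, question, answer, r)
                                 for r in preferred)
        # The selected machine-generated data is used for fine-tuning.
        vlm = finetune(vlm, training_pool)
    return vlm
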

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-brack23a,
  title     = {{ILLUME}: Rationalizing Vision-Language Models through Human Interactions},
  author    = {Brack, Manuel and Schramowski, Patrick and Deiseroth, Bj\"{o}rn and Kersting, Kristian},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {3021--3037},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/brack23a/brack23a.pdf},
  url       = {https://proceedings.mlr.press/v202/brack23a.html},
  abstract  = {Bootstrapping from pre-trained language models has been proven to be an efficient approach for building vision-language models (VLM) for tasks such as image captioning or visual question answering. However, outputs of these models rarely align with user’s rationales for specific answers. In order to improve this alignment and reinforce commonsense reasons, we propose a tuning paradigm based on human interactions with machine-generated data. Our ILLUME executes the following loop: Given an image-question-answer prompt, the VLM samples multiple candidate rationales, and a human critic provides feedback via preference selection, used for fine-tuning. This loop increases the training data and gradually carves out the VLM’s rationalization capabilities that are aligned with human intent. Our exhaustive experiments demonstrate that ILLUME is competitive with standard supervised finetuning while using significantly fewer training data and only requiring minimal feedback.}
}
Endnote
%0 Conference Paper
%T ILLUME: Rationalizing Vision-Language Models through Human Interactions
%A Manuel Brack
%A Patrick Schramowski
%A Björn Deiseroth
%A Kristian Kersting
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-brack23a
%I PMLR
%P 3021--3037
%U https://proceedings.mlr.press/v202/brack23a.html
%V 202
%X Bootstrapping from pre-trained language models has been proven to be an efficient approach for building vision-language models (VLM) for tasks such as image captioning or visual question answering. However, outputs of these models rarely align with user’s rationales for specific answers. In order to improve this alignment and reinforce commonsense reasons, we propose a tuning paradigm based on human interactions with machine-generated data. Our ILLUME executes the following loop: Given an image-question-answer prompt, the VLM samples multiple candidate rationales, and a human critic provides feedback via preference selection, used for fine-tuning. This loop increases the training data and gradually carves out the VLM’s rationalization capabilities that are aligned with human intent. Our exhaustive experiments demonstrate that ILLUME is competitive with standard supervised finetuning while using significantly fewer training data and only requiring minimal feedback.
APA
Brack, M., Schramowski, P., Deiseroth, B. & Kersting, K. (2023). ILLUME: Rationalizing Vision-Language Models through Human Interactions. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:3021-3037. Available from https://proceedings.mlr.press/v202/brack23a.html.
