Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems

Mikołaj Małkiński, Szymon Pawlonka, Jacek Mańdziuk
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:42795-42833, 2025.

Abstract

Abstract visual reasoning (AVR) involves discovering shared concepts across images through analogy, akin to solving IQ test problems. Bongard Problems (BPs) remain a key challenge in AVR, requiring both visual reasoning and verbal description. We investigate whether multimodal large language models (MLLMs) can solve BPs by formulating a set of diverse MLLM-suited solution strategies and testing 4 proprietary and 4 open-access models on 3 BP datasets featuring synthetic (classic BPs) and real-world (Bongard-HOI and Bongard-OpenWorld) images. Despite some successes on real-world datasets, MLLMs struggle with synthetic BPs. To explore this gap, we introduce Bongard-RWR, a dataset representing synthetic BP concepts using real-world images. Our findings suggest that the weak MLLM performance on classic BPs is not due to domain specificity but rather stems from their general AVR limitations. Code and dataset are available at: https://github.com/pavonism/bongard-rwr
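For readers unfamiliar with the task format, the sketch below illustrates one common evaluation framing of Bongard Problems used by real-world benchmarks such as Bongard-HOI and Bongard-OpenWorld: a held-out query image must be assigned to the side (positive or negative) whose hidden concept it satisfies, and models are scored by accuracy. This is a minimal hypothetical illustration, not the authors' released code; all class, field, and function names are assumptions made for this example.

```python
# Hypothetical sketch (not the authors' code): a Bongard Problem framed as a
# binary side-assignment task, scored by accuracy over a set of problems.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BongardProblem:
    left: List[str]    # paths to images that satisfy the hidden concept
    right: List[str]   # paths to images that violate it
    query: str         # held-out image to assign to one of the two sides
    answer: str        # ground truth: "left" or "right"


def accuracy(problems: List[BongardProblem],
             predict: Callable[[BongardProblem], str]) -> float:
    """Fraction of problems whose predicted side matches the ground truth.

    `predict` can implement any solution strategy, e.g. a wrapper that sends
    the side images and the query image to an MLLM and parses its
    "left"/"right" answer from the generated text.
    """
    if not problems:
        return 0.0
    return sum(predict(p) == p.answer for p in problems) / len(problems)
```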

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-malkinski25a,
  title     = {Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems},
  author    = {Ma{\l}ki\'{n}ski, Miko{\l}aj and Pawlonka, Szymon and Ma\'{n}dziuk, Jacek},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {42795--42833},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/malkinski25a/malkinski25a.pdf},
  url       = {https://proceedings.mlr.press/v267/malkinski25a.html},
  abstract  = {Abstract visual reasoning (AVR) involves discovering shared concepts across images through analogy, akin to solving IQ test problems. Bongard Problems (BPs) remain a key challenge in AVR, requiring both visual reasoning and verbal description. We investigate whether multimodal large language models (MLLMs) can solve BPs by formulating a set of diverse MLLM-suited solution strategies and testing $4$ proprietary and $4$ open-access models on $3$ BP datasets featuring synthetic (classic BPs) and real-world (Bongard HOI and Bongard-OpenWorld) images. Despite some successes on real-world datasets, MLLMs struggle with synthetic BPs. To explore this gap, we introduce Bongard-RWR, a dataset representing synthetic BP concepts using real-world images. Our findings suggest that weak MLLM performance on classical BPs is not due to the domain specificity, but rather comes from their general AVR limitations. Code and dataset are available at: https://github.com/pavonism/bongard-rwr}
}
Endnote
%0 Conference Paper
%T Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems
%A Mikołaj Małkiński
%A Szymon Pawlonka
%A Jacek Mańdziuk
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-malkinski25a
%I PMLR
%P 42795--42833
%U https://proceedings.mlr.press/v267/malkinski25a.html
%V 267
%X Abstract visual reasoning (AVR) involves discovering shared concepts across images through analogy, akin to solving IQ test problems. Bongard Problems (BPs) remain a key challenge in AVR, requiring both visual reasoning and verbal description. We investigate whether multimodal large language models (MLLMs) can solve BPs by formulating a set of diverse MLLM-suited solution strategies and testing $4$ proprietary and $4$ open-access models on $3$ BP datasets featuring synthetic (classic BPs) and real-world (Bongard HOI and Bongard-OpenWorld) images. Despite some successes on real-world datasets, MLLMs struggle with synthetic BPs. To explore this gap, we introduce Bongard-RWR, a dataset representing synthetic BP concepts using real-world images. Our findings suggest that weak MLLM performance on classical BPs is not due to the domain specificity, but rather comes from their general AVR limitations. Code and dataset are available at: https://github.com/pavonism/bongard-rwr
APA
Małkiński, M., Pawlonka, S. & Mańdziuk, J. (2025). Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:42795-42833. Available from https://proceedings.mlr.press/v267/malkinski25a.html.
