Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty?

Giacomo Camposampiero, Michael Hersche, Roger Wattenhofer, Abu Sebastian, Abbas Rahimi
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:750-776, 2025.

Abstract

This work presents a first evaluation of two state-of-the-art Large Reasoning Models (LRMs), OpenAI’s o3-mini and DeepSeek R1, on analogical reasoning, focusing on well-established nonverbal human IQ tests based on Raven’s progressive matrices. We benchmark with the I-RAVEN dataset and its extension, I-RAVEN-X, which tests the ability to generalize to longer reasoning rules and larger ranges of attribute values. To assess the influence of visual uncertainty on these symbolic analogical reasoning tests, we extend the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a two-fold strategy to simulate this imperfect visual perception: 1) we introduce confounding attributes which, being sampled at random, do not contribute to the prediction of the correct answer of the puzzles, and 2) we smooth the distributions of the input attributes’ values. We observe a sharp decline in OpenAI’s o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to just 17.0% (approaching random chance) on the more challenging I-RAVEN-X, which increases input length and range and emulates perceptual uncertainty. This drop occurs despite the model spending 3.4x more reasoning tokens. A similar trend is observed for DeepSeek R1: from 80.6% to 23.2%. In contrast, a neuro-symbolic probabilistic abductive model, ARLC, which achieves state-of-the-art performance on I-RAVEN, reasons robustly under all these out-of-distribution tests, with only a modest accuracy reduction from 98.6% to 88.0%. Our code is available at https://github.com/IBM/raven-large-language-models.
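For readers curious about the two perturbations described in the abstract, the following is a minimal Python sketch of how confounding attributes and smoothed attribute-value distributions could be simulated on a symbolic panel representation. All names (add_confounders, smooth_value), attribute labels, value ranges, and the mass_on_true/spread parameters are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

import numpy as np

rng = np.random.default_rng(seed=42)

def add_confounders(panel, num_confounders=5, value_range=100):
    # Confounding attributes are drawn uniformly at random, so they follow no
    # row rule and carry no information about the correct answer panel.
    confounders = {
        f"confounder_{i}": int(rng.integers(value_range))
        for i in range(num_confounders)
    }
    return {**panel, **confounders}

def smooth_value(value, value_range=100, mass_on_true=0.8, spread=3):
    # Replace a crisp attribute value with a probability distribution that keeps
    # most of the mass on the true value and spreads the rest over neighboring
    # values, emulating an uncertain perception front end.
    dist = np.zeros(value_range)
    neighbors = [v for v in range(value - spread, value + spread + 1)
                 if 0 <= v < value_range and v != value]
    dist[value] = mass_on_true
    for v in neighbors:
        dist[v] = (1.0 - mass_on_true) / len(neighbors)
    return dist

# One panel described by symbolic attributes (names and ranges are illustrative).
panel = {"type": 3, "size": 7, "color": 42}
print(add_confounders(panel))
print(smooth_value(panel["color"]).round(3).nonzero())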

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-camposampiero25a,
  title     = {Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty?},
  author    = {Camposampiero, Giacomo and Hersche, Michael and Wattenhofer, Roger and Sebastian, Abu and Rahimi, Abbas},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {750--776},
  year      = {2025},
  editor    = {H. Gilpin, Leilani and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/camposampiero25a/camposampiero25a.pdf},
  url       = {https://proceedings.mlr.press/v284/camposampiero25a.html},
  abstract  = {This work presents a first evaluation of two state-of-the-art Large Reasoning Models (LRMs), OpenAI’s o3-mini and DeepSeek R1, on analogical reasoning, focusing on well-established nonverbal human IQ tests based on Raven’s progressive matrices. We benchmark with the I-RAVEN dataset and its extension, I-RAVEN-X, which tests the ability to generalize to longer reasoning rules and ranges of the attribute values. To assess the influence of visual uncertainties on these symbolic analogical reasoning tests, we extend the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a two-fold strategy to simulate this imperfect visual perception: 1) we introduce confounding attributes which, being sampled at random, do not contribute to the prediction of the correct answer of the puzzles, and 2) smooth the distributions of the input attributes’ values. We observe a sharp decline in OpenAI’s o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to just 17.0%—approaching random chance—on the more challenging I-RAVEN-X, which increases input length and range and emulates perceptual uncertainty. This drop occurred despite spending 3.4x more reasoning tokens. A similar trend is also observed for DeepSeek R1: from 80.6% to 23.2%. On the other hand, a neuro-symbolic probabilistic abductive model, ARLC, that achieves state-of-the-art performances on I-RAVEN, can robustly reason under all these out-of-distribution tests, maintaining strong accuracy with only a modest accuracy reduction from 98.6% to 88.0%. Our code is available at https://github.com/IBM/raven-large-language-models.}
}
Endnote
%0 Conference Paper
%T Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty?
%A Giacomo Camposampiero
%A Michael Hersche
%A Roger Wattenhofer
%A Abu Sebastian
%A Abbas Rahimi
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-camposampiero25a
%I PMLR
%P 750--776
%U https://proceedings.mlr.press/v284/camposampiero25a.html
%V 284
%X This work presents a first evaluation of two state-of-the-art Large Reasoning Models (LRMs), OpenAI’s o3-mini and DeepSeek R1, on analogical reasoning, focusing on well-established nonverbal human IQ tests based on Raven’s progressive matrices. We benchmark with the I-RAVEN dataset and its extension, I-RAVEN-X, which tests the ability to generalize to longer reasoning rules and ranges of the attribute values. To assess the influence of visual uncertainties on these symbolic analogical reasoning tests, we extend the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a two-fold strategy to simulate this imperfect visual perception: 1) we introduce confounding attributes which, being sampled at random, do not contribute to the prediction of the correct answer of the puzzles, and 2) smooth the distributions of the input attributes’ values. We observe a sharp decline in OpenAI’s o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to just 17.0%—approaching random chance—on the more challenging I-RAVEN-X, which increases input length and range and emulates perceptual uncertainty. This drop occurred despite spending 3.4x more reasoning tokens. A similar trend is also observed for DeepSeek R1: from 80.6% to 23.2%. On the other hand, a neuro-symbolic probabilistic abductive model, ARLC, that achieves state-of-the-art performances on I-RAVEN, can robustly reason under all these out-of-distribution tests, maintaining strong accuracy with only a modest accuracy reduction from 98.6% to 88.0%. Our code is available at https://github.com/IBM/raven-large-language-models.
APA
Camposampiero, G., Hersche, M., Wattenhofer, R., Sebastian, A. & Rahimi, A. (2025). Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty? Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:750-776. Available from https://proceedings.mlr.press/v284/camposampiero25a.html.
