Contour Integration Underlies Human-Like Vision

Ben Lonnqvist, Elsa Scialom, Abdulkadir Gokce, Zehra Merchant, Michael Herzog, Martin Schrimpf
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:40290-40311, 2025.

Abstract

Despite the tremendous success of deep learning in computer vision, models still fall behind humans in generalizing to new input distributions. Existing benchmarks do not investigate the specific failure points of models by analyzing performance under many controlled conditions. Our study systematically dissects where and why models struggle with contour integration, a hallmark of human vision, by designing an experiment that tests object recognition under various levels of object fragmentation. Humans (n=50) perform at high accuracy, even with few object contours present. This is in contrast to models, which exhibit substantially lower sensitivity to increasing object contours, with most of the over 1,000 models we tested barely performing above chance. Only at very large scales (training dataset sizes of $\sim$5B samples) do models begin to approach human performance. Importantly, humans exhibit an integration bias: a preference for recognizing objects made up of directional fragments over directionless ones. We find not only that models sharing this property perform better at our task, but also that this bias increases with model training dataset size, and that training models to exhibit contour integration leads to a high shape bias. Taken together, our results suggest that contour integration is a hallmark of object vision that underlies object recognition performance, and may be a mechanism learned from data at scale.
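
The integration bias described above is, at its core, a contrast between recognition accuracy on stimuli built from directional fragments and stimuli built from directionless fragments. The Python sketch below is not the authors' code; it is a minimal, hypothetical illustration of how such a bias score could be computed from per-condition accuracies (the helper name and the dummy numbers are invented for the example, not results from the paper).

import numpy as np

def integration_bias(acc_directional, acc_directionless):
    """Hypothetical bias score: mean accuracy advantage for directional
    fragments over directionless fragments, averaged across fragmentation
    levels. Positive values indicate a human-like integration bias."""
    acc_directional = np.asarray(acc_directional, dtype=float)
    acc_directionless = np.asarray(acc_directionless, dtype=float)
    return float(np.mean(acc_directional - acc_directionless))

# Dummy per-condition accuracies, ordered from most to fewest object
# contours present; illustrative numbers only.
human_directional   = [0.95, 0.92, 0.85, 0.70]
human_directionless = [0.93, 0.85, 0.72, 0.50]
model_directional   = [0.80, 0.55, 0.35, 0.20]
model_directionless = [0.78, 0.54, 0.36, 0.21]

print("human bias:", integration_bias(human_directional, human_directionless))
print("model bias:", integration_bias(model_directional, model_directionless))

Under this toy definition, a near-zero score (as in the model rows above) means the observer is largely insensitive to whether fragments carry contour direction, whereas a clearly positive score mirrors the human preference reported in the abstract.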

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lonnqvist25a,
  title     = {Contour Integration Underlies Human-Like Vision},
  author    = {Lonnqvist, Ben and Scialom, Elsa and Gokce, Abdulkadir and Merchant, Zehra and Herzog, Michael and Schrimpf, Martin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {40290--40311},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lonnqvist25a/lonnqvist25a.pdf},
  url       = {https://proceedings.mlr.press/v267/lonnqvist25a.html},
  abstract  = {Despite the tremendous success of deep learning in computer vision, models still fall behind humans in generalizing to new input distributions. Existing benchmarks do not investigate the specific failure points of models by analyzing performance under many controlled conditions. Our study systematically dissects where and why models struggle with contour integration - a hallmark of human vision - by designing an experiment that tests object recognition under various levels of object fragmentation. Humans (n=50) perform at high accuracy, even with few object contours present. This is in contrast to models which exhibit substantially lower sensitivity to increasing object contours, with most of the over 1,000 models we tested barely performing above chance. Only at very large scales ($\sim5B$ training dataset size) do models begin to approach human performance. Importantly, humans exhibit an integration bias - a preference towards recognizing objects made up of directional fragments over directionless fragments. We find that not only do models that share this property perform better at our task, but that this bias also increases with model training dataset size, and training models to exhibit contour integration leads to high shape bias. Taken together, our results suggest that contour integration is a hallmark of object vision that underlies object recognition performance, and may be a mechanism learned from data at scale.}
}
Endnote
%0 Conference Paper
%T Contour Integration Underlies Human-Like Vision
%A Ben Lonnqvist
%A Elsa Scialom
%A Abdulkadir Gokce
%A Zehra Merchant
%A Michael Herzog
%A Martin Schrimpf
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lonnqvist25a
%I PMLR
%P 40290--40311
%U https://proceedings.mlr.press/v267/lonnqvist25a.html
%V 267
%X Despite the tremendous success of deep learning in computer vision, models still fall behind humans in generalizing to new input distributions. Existing benchmarks do not investigate the specific failure points of models by analyzing performance under many controlled conditions. Our study systematically dissects where and why models struggle with contour integration - a hallmark of human vision - by designing an experiment that tests object recognition under various levels of object fragmentation. Humans (n=50) perform at high accuracy, even with few object contours present. This is in contrast to models which exhibit substantially lower sensitivity to increasing object contours, with most of the over 1,000 models we tested barely performing above chance. Only at very large scales ($\sim5B$ training dataset size) do models begin to approach human performance. Importantly, humans exhibit an integration bias - a preference towards recognizing objects made up of directional fragments over directionless fragments. We find that not only do models that share this property perform better at our task, but that this bias also increases with model training dataset size, and training models to exhibit contour integration leads to high shape bias. Taken together, our results suggest that contour integration is a hallmark of object vision that underlies object recognition performance, and may be a mechanism learned from data at scale.
APA
Lonnqvist, B., Scialom, E., Gokce, A., Merchant, Z., Herzog, M. & Schrimpf, M. (2025). Contour Integration Underlies Human-Like Vision. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:40290-40311. Available from https://proceedings.mlr.press/v267/lonnqvist25a.html.