Comparing neural models using their perceptual discriminability predictions

Jingyang Zhou, Chanwoo Chun, Ajay Subramanian, Eero P Simoncelli
Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, PMLR 243:170-181, 2024.

Abstract

A variety of methods have been developed to compare models of visual representation. However, internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and dissimilar model representations (according to existing model comparison methods) do not guarantee dissimilar perceptual predictions. Here, we generalize a previous method (“eigendistortions”; Berardino et al., 2017) to compare models based on their metric tensors. Metric tensors characterize a model’s sensitivity to stimulus perturbations, reflecting both the geometric and stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute force comparison of model-predicted metric tensors using human perceptual thresholds would require an impossibly large set of measurements, since one needs to perturb a stimulus in all possible orthogonal directions. To circumvent this “perceptual curse of dimensionality”, we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of both. We demonstrate the power of this method in assessing two examples: 1) comparing models for color discrimination; 2) comparing autoencoders trained with different regularizers.
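To make the key construction concrete: for a deterministic response function with additive Gaussian response noise, the metric tensor at a stimulus reduces to the Fisher information G(s) = J(s)ᵀJ(s), where J is the Jacobian of the response, and the most- and least-informative perturbations are the extremal eigenvectors of G. The following is a minimal sketch of that computation on a hypothetical toy model (the linear-nonlinear model, variable names, and noise assumption here are illustrative, not taken from the paper):

```python
import numpy as np

# Toy "model": a fixed linear-nonlinear response f(s) = tanh(W s).
# Under additive Gaussian response noise, the metric tensor at stimulus s
# is the Fisher information G(s) = J(s)^T J(s), with J the Jacobian of f.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # maps a 16-d stimulus to an 8-d response

def jacobian(s):
    # d/ds tanh(W s) = diag(1 - tanh(W s)^2) @ W
    u = np.tanh(W @ s)
    return (1.0 - u**2)[:, None] * W

def metric_tensor(s):
    J = jacobian(s)
    return J.T @ J

def eigendistortions(s):
    # Most- and least-discriminable unit perturbations are the
    # eigenvectors of G with the largest and smallest eigenvalues.
    G = metric_tensor(s)
    evals, evecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    return evecs[:, -1], evecs[:, 0]   # (most, least) informative directions

s = rng.standard_normal(16)
e_max, e_min = eigendistortions(s)
G = metric_tensor(s)
# Predicted squared discriminability of a unit perturbation d is d^T G d;
# the top eigenvector maximizes it, the bottom eigenvector minimizes it.
print(e_max @ G @ e_max >= e_min @ G @ e_min)  # prints True
```

Measuring human discrimination only along such extremal directions, rather than along every orthogonal axis of stimulus space, is what reduces the measurement cost described in the abstract.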

Cite this Paper


BibTeX
@InProceedings{pmlr-v243-zhou24a,
  title = {Comparing neural models using their perceptual discriminability predictions},
  author = {Zhou, Jingyang and Chun, Chanwoo and Subramanian, Ajay and Simoncelli, Eero P},
  booktitle = {Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models},
  pages = {170--181},
  year = {2024},
  editor = {Fumero, Marco and Rodolà, Emanuele and Domine, Clementine and Locatello, Francesco and Dziugaite, Karolina and Caron, Mathilde},
  volume = {243},
  series = {Proceedings of Machine Learning Research},
  month = {15 Dec},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v243/zhou24a/zhou24a.pdf},
  url = {https://proceedings.mlr.press/v243/zhou24a.html},
  abstract = {A variety of methods have been developed to compare models of visual representation. However, internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and dissimilar model representations (according to existing model comparison methods) do not guarantee dissimilar perceptual predictions. Here, we generalize a previous method (“eigendistortions”; Berardino et al., 2017) to compare models based on their metric tensors. Metric tensors characterize a model’s sensitivity to stimulus perturbations, reflecting both the geometric and stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute force comparison of model-predicted metric tensors using human perceptual thresholds would require an impossibly large set of measurements, since one needs to perturb a stimulus in all possible orthogonal directions. To circumvent this “perceptual curse of dimensionality”, we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of both. We demonstrate the power of this method in assessing two examples: 1) comparing models for color discrimination; 2) comparing autoencoders trained with different regularizers.}
}
Endnote
%0 Conference Paper
%T Comparing neural models using their perceptual discriminability predictions
%A Jingyang Zhou
%A Chanwoo Chun
%A Ajay Subramanian
%A Eero P Simoncelli
%B Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Emanuele Rodolà
%E Clementine Domine
%E Francesco Locatello
%E Karolina Dziugaite
%E Mathilde Caron
%F pmlr-v243-zhou24a
%I PMLR
%P 170--181
%U https://proceedings.mlr.press/v243/zhou24a.html
%V 243
%X A variety of methods have been developed to compare models of visual representation. However, internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and dissimilar model representations (according to existing model comparison methods) do not guarantee dissimilar perceptual predictions. Here, we generalize a previous method (“eigendistortions”; Berardino et al., 2017) to compare models based on their metric tensors. Metric tensors characterize a model’s sensitivity to stimulus perturbations, reflecting both the geometric and stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute force comparison of model-predicted metric tensors using human perceptual thresholds would require an impossibly large set of measurements, since one needs to perturb a stimulus in all possible orthogonal directions. To circumvent this “perceptual curse of dimensionality”, we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of both. We demonstrate the power of this method in assessing two examples: 1) comparing models for color discrimination; 2) comparing autoencoders trained with different regularizers.
APA
Zhou, J., Chun, C., Subramanian, A. & Simoncelli, E.P. (2024). Comparing neural models using their perceptual discriminability predictions. Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 243:170-181. Available from https://proceedings.mlr.press/v243/zhou24a.html.
