Comparing neural models using their perceptual discriminability predictions
Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, PMLR 243:170-181, 2024.
Abstract
A variety of methods have been developed to compare models of visual representation. However, internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and dissimilar model representations (according to existing model comparison methods) do not guarantee dissimilar perceptual predictions. Here, we generalize a previous method (“eigendistortions”; Berardino et al., 2017) to compare models based on their metric tensors. Metric tensors characterize a model’s sensitivity to stimulus perturbations, reflecting both the geometric and the stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute-force comparison of model-predicted metric tensors using human perceptual thresholds would require an impossibly large set of measurements, since one would need to perturb each stimulus in all possible orthogonal directions. To circumvent this “perceptual curse of dimensionality”, we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of both. We demonstrate the power of this method with two examples: 1) comparing models for color discrimination; 2) comparing autoencoders trained with different regularizers.
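For concreteness, here is a minimal sketch of the underlying computation, under the simplifying assumption (not stated in the abstract) that the model is a differentiable map whose response is corrupted by additive white Gaussian noise. In that case the metric tensor at a stimulus s reduces to the Fisher information matrix F(s) = J(s)ᵀJ(s), where J is the Jacobian of the model response, and the most- and least-informative perturbations of Berardino et al. (2017) are its extremal eigenvectors. The function names (`metric_tensor`, `eigendistortions`) and the 1-D stimulus layout are hypothetical choices for illustration, not the paper’s implementation.

```python
import torch

def metric_tensor(model, stimulus):
    """Metric tensor F(s) = J(s)^T J(s) at a stimulus, assuming the model
    response carries additive white Gaussian noise (a simplification).
    `stimulus` is taken to be a 1-D tensor for illustration."""
    # Jacobian of the (flattened) model response w.r.t. the stimulus.
    J = torch.autograd.functional.jacobian(
        lambda x: model(x).flatten(), stimulus
    )
    return J.T @ J

def eigendistortions(model, stimulus):
    """Most- and least-informative perturbation directions: the extremal
    eigenvectors of the metric tensor (cf. Berardino et al., 2017)."""
    F = metric_tensor(model, stimulus)
    eigvals, eigvecs = torch.linalg.eigh(F)  # eigenvalues in ascending order
    return eigvecs[:, -1], eigvecs[:, 0]     # (most, least) informative
```

Comparing two models then amounts to measuring human discriminability along each model’s extremal perturbations for the same test stimuli, rather than probing all orthogonal directions of the stimulus space.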