Pitfalls in Measuring Neural Transferability
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:279-291, 2024.
Abstract
Transferability scores quantify how well a pre-trained model suits a downstream task and thereby help in selecting an optimal pre-trained model for transfer learning. This work draws attention to significant shortcomings of state-of-the-art transferability scores. To this end, we propose \emph{neural collapse-based transferability scores} that analyse the intra-class \emph{variability collapse} and inter-class discriminative ability of a pre-trained model's penultimate embedding space. Experiments across the image and audio domains demonstrate that such a simple variability analysis of the feature space is sufficient to satisfy the current definition of transferability scores, indicating that a new, more general definition of transferability is needed. Building on these results, we highlight new research directions and postulate the characteristics of an ideal transferability measure, which should help streamline future studies of this problem.
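For concreteness, the following is a minimal sketch of one plausible variability-based score in the spirit the abstract describes: it contrasts intra-class scatter with inter-class scatter in the penultimate embedding space. The function name `nc_transferability_score` and the between/within scatter ratio are illustrative assumptions, not necessarily the score defined in the paper.

```python
import numpy as np

def nc_transferability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Illustrative neural-collapse-style transferability proxy.

    features: (N, D) penultimate-layer embeddings of downstream-task samples.
    labels:   (N,)   downstream-task class labels.
    Returns a score where larger values indicate stronger class separation,
    i.e. low intra-class variability relative to inter-class spread.
    """
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)

    within, between = 0.0, 0.0
    for c in classes:
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        # Intra-class variability: spread of samples around their class mean.
        within += np.sum((class_feats - class_mean) ** 2)
        # Inter-class discriminability: spread of class means around the global mean.
        between += len(class_feats) * np.sum((class_mean - global_mean) ** 2)

    # A higher between/within ratio suggests embeddings closer to neural collapse.
    return between / (within + 1e-12)
```

Under this sketch, model selection would amount to extracting penultimate features from each candidate pre-trained model on the target data and ranking the candidates by the resulting score.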