Pitfalls in Measuring Neural Transferability

Suresh Suryaka, Abrol Vinayak, Thakur Anshul
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:279-291, 2024.

Abstract

Transferability scores quantify the aptness of the pre-trained models for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to the significant shortcomings of state-of-the-art transferability scores. To this aim, we propose \emph{neural collapse-based transferability scores} that analyse intra-class \emph{variability collapse} and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. The experimentation across the image and audio domains demonstrates that such a simple variability analysis of the feature space is sufficient to satisfy the current definition of transferability scores, and there is a requirement for a new generic definition of transferability. Further, building on these results, we highlight new research directions and postulate characteristics of an ideal transferability measure that will be helpful in streamlining future studies targeting this problem.

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-suryaka24a,
  title     = {Pitfalls in Measuring Neural Transferability},
  author    = {Suryaka, Suresh and Vinayak, Abrol and Anshul, Thakur},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {279--291},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/suryaka24a/suryaka24a.pdf},
  url       = {https://proceedings.mlr.press/v228/suryaka24a.html},
  abstract  = {Transferability scores quantify the aptness of the pre-trained models for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to the significant shortcomings of state-of-the-art transferability scores. To this aim, we propose \emph{neural collapse-based transferability scores} that analyse intra-class \emph{variability collapse} and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. The experimentation across the image and audio domains demonstrates that such a simple variability analysis of the feature space is sufficient to satisfy the current definition of transferability scores, and there is a requirement for a new generic definition of transferability. Further, building on these results, we highlight new research directions and postulate characteristics of an ideal transferability measure that will be helpful in streamlining future studies targeting this problem.}
}
Endnote
%0 Conference Paper
%T Pitfalls in Measuring Neural Transferability
%A Suresh Suryaka
%A Abrol Vinayak
%A Thakur Anshul
%B Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2024
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Nina Miolane
%F pmlr-v228-suryaka24a
%I PMLR
%P 279--291
%U https://proceedings.mlr.press/v228/suryaka24a.html
%V 228
%X Transferability scores quantify the aptness of the pre-trained models for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to the significant shortcomings of state-of-the-art transferability scores. To this aim, we propose \emph{neural collapse-based transferability scores} that analyse intra-class \emph{variability collapse} and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. The experimentation across the image and audio domains demonstrates that such a simple variability analysis of the feature space is sufficient to satisfy the current definition of transferability scores, and there is a requirement for a new generic definition of transferability. Further, building on these results, we highlight new research directions and postulate characteristics of an ideal transferability measure that will be helpful in streamlining future studies targeting this problem.
APA
Suryaka, S., Vinayak, A. & Anshul, T. (2024). Pitfalls in Measuring Neural Transferability. Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 228:279-291. Available from https://proceedings.mlr.press/v228/suryaka24a.html.