LEEP: A New Measure to Evaluate Transferability of Learned Representations

Cuong Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7294-7305, 2020.

Abstract

We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: when given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing method in terms of the correlations with actual transfer accuracy.
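For readers who want to see the computation concretely, below is a minimal NumPy sketch of the LEEP measure as described in the abstract: run the target data through the source classifier once, form the empirical joint distribution over (target label, source label), and average the log of the resulting expected empirical predictions. The function name leep and the variables source_probs (an n-by-Z array of source-classifier softmax outputs on the n target examples) and target_labels (an array of n integer target labels) are illustrative choices for this sketch, not the authors' released code.

import numpy as np

def leep(source_probs: np.ndarray, target_labels: np.ndarray) -> float:
    # source_probs: (n, Z) softmax outputs of the source classifier on target data
    # target_labels: (n,) integer target labels in {0, ..., Y-1}
    n, _ = source_probs.shape
    num_target_classes = int(target_labels.max()) + 1

    # Empirical joint distribution P(y, z) over target label y and source label z.
    one_hot_targets = np.eye(num_target_classes)[target_labels]   # (n, Y)
    joint = one_hot_targets.T @ source_probs / n                   # (Y, Z)

    # Empirical conditional distribution P(y | z) = P(y, z) / P(z).
    marginal_z = joint.sum(axis=0, keepdims=True)                  # (1, Z)
    conditional = joint / marginal_z                               # (Y, Z)

    # Expected empirical prediction for each example's true label:
    # p(y_i | x_i) = sum_z P(y_i | z) * theta(x_i)_z
    expected_pred = source_probs @ conditional.T                   # (n, Y)
    per_example = expected_pred[np.arange(n), target_labels]

    # LEEP is the average log of these predictions (always <= 0; higher is better).
    return float(np.mean(np.log(per_example)))

A higher (less negative) LEEP score indicates a source model whose representations are expected to transfer better to the target task.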

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-nguyen20b,
  title     = {{LEEP}: A New Measure to Evaluate Transferability of Learned Representations},
  author    = {Nguyen, Cuong and Hassner, Tal and Seeger, Matthias and Archambeau, Cedric},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7294--7305},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/nguyen20b/nguyen20b.pdf},
  url       = {https://proceedings.mlr.press/v119/nguyen20b.html},
  abstract  = {We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: when given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing method in terms of the correlations with actual transfer accuracy.}
}
Endnote
%0 Conference Paper
%T LEEP: A New Measure to Evaluate Transferability of Learned Representations
%A Cuong Nguyen
%A Tal Hassner
%A Matthias Seeger
%A Cedric Archambeau
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-nguyen20b
%I PMLR
%P 7294--7305
%U https://proceedings.mlr.press/v119/nguyen20b.html
%V 119
%X We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: when given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing method in terms of the correlations with actual transfer accuracy.
APA
Nguyen, C., Hassner, T., Seeger, M., & Archambeau, C. (2020). LEEP: A New Measure to Evaluate Transferability of Learned Representations. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7294-7305. Available from https://proceedings.mlr.press/v119/nguyen20b.html.
