Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks

Daphna Weinshall, Gad Cohen, Dan Amir
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5238-5246, 2018.

Abstract

We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method is monotonically decreasing with the difficulty of the examples. Moreover, among all equally difficult points, convergence is faster when using points which incur higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely a significant boost in convergence speed at the beginning of training. When the task is made more difficult, we also observe improved generalization. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.
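
For illustration only, the curriculum described above can be sketched in a few lines of Python: a separately trained "teacher" model scores how easy each training example is, and a "student" trained with SGD then sees the examples ordered from easy to hard. The dataset, the scikit-learn models, and the confidence-based difficulty score below are assumptions made for this sketch, not the authors' exact procedure (in the paper, difficulty is inferred by transfer from a network pre-trained on a different task).

# Minimal sketch of an easy-to-hard curriculum; teacher/student setup is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, SGDClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# "Teacher": stand-in for the pre-trained network that supplies difficulty scores.
teacher = LogisticRegression(max_iter=1000).fit(X, y)

# Difficulty score: probability the teacher assigns to the correct label
# (higher probability = easier example).
easiness = teacher.predict_proba(X)[np.arange(len(y)), y]
order = np.argsort(-easiness)  # easiest examples first

# "Student": trained with SGD on mini-batches presented in curriculum order.
student = SGDClassifier(loss="log_loss", random_state=0)
classes = np.unique(y)
for start in range(0, len(order), 64):
    idx = order[start:start + 64]
    student.partial_fit(X[idx], y[idx], classes=classes)

print("training accuracy:", student.score(X, y))

Sorting by teacher confidence is only one possible difficulty measure; any score that correlates with example difficulty could replace it.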

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-weinshall18a,
  title     = {Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks},
  author    = {Weinshall, Daphna and Cohen, Gad and Amir, Dan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5238--5246},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/weinshall18a/weinshall18a.pdf},
  url       = {https://proceedings.mlr.press/v80/weinshall18a.html},
  abstract  = {We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method is monotonically decreasing with the difficulty of the examples. Moreover, among all equally difficult points, convergence is faster when using points which incur higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely a significant boost in convergence speed at the beginning of training. When the task is made more difficult, we also observe improved generalization. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.}
}
Endnote
%0 Conference Paper
%T Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks
%A Daphna Weinshall
%A Gad Cohen
%A Dan Amir
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-weinshall18a
%I PMLR
%P 5238--5246
%U https://proceedings.mlr.press/v80/weinshall18a.html
%V 80
%X We provide a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss. We prove that the rate of convergence of an ideal curriculum learning method is monotonically decreasing with the difficulty of the examples. Moreover, among all equally difficult points, convergence is faster when using points which incur higher loss with respect to the current hypothesis. We then analyze curriculum learning in the context of training a CNN. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely a significant boost in convergence speed at the beginning of training. When the task is made more difficult, we also observe improved generalization. Finally, curriculum learning exhibits robustness against unfavorable conditions such as excessive regularization.
APA
Weinshall, D., Cohen, G. & Amir, D. (2018). Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5238-5246. Available from https://proceedings.mlr.press/v80/weinshall18a.html.
