Benchmarking Learning Efficiency in Deep Reservoir Computing

Hugo Cisneros, Tomas Mikolov, Josef Sivic
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:532-547, 2022.

Abstract

It is common to evaluate the performance of a machine learning model by measuring its predictive power on a test dataset. This approach favors complicated models that can smoothly fit complex functions and generalize well from training data points. Although they are essential components of intelligence, the speed and data efficiency of the learning process are rarely reported or compared across candidate models. In this paper, we introduce a benchmark of increasingly difficult tasks together with a data efficiency metric to measure how quickly machine learning models learn from training data. We compare the learning speed of established sequential supervised models, such as RNNs, LSTMs, and Transformers, with lesser-known alternative models based on reservoir computing. Solving the proposed tasks effectively requires a wide range of computational primitives, such as memory or the ability to compute Boolean functions. Surprisingly, we observe that reservoir computing systems that rely on dynamically evolving feature maps learn faster than fully supervised methods trained with stochastic gradient optimization, while achieving comparable accuracy scores. The code, benchmark, trained models, and results to reproduce our experiments are available at \url{https://github.com/hugcis/benchmark_learning_efficiency}.
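For intuition, below is a minimal, self-contained echo state network sketch in Python, a standard form of reservoir computing: a fixed random recurrent map expands each input into a high-dimensional feature vector, and only a linear readout is fitted, in closed form rather than by stochastic gradient descent, which is the property behind the fast learning discussed in the abstract. All dimensions, scaling constants, and the delayed-recall toy task are illustrative assumptions, not the paper's implementation; the actual benchmark code is in the linked repository.

# Minimal echo state network sketch (illustrative assumptions throughout;
# see the linked repository for the paper's actual models and benchmark).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 8, 200                       # assumed toy dimensions

# Fixed random input and recurrent weights; neither is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of one-hot input vectors."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def fit_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: one closed-form fit, no gradient steps."""
    A = states.T @ states + ridge * np.eye(n_res)
    return np.linalg.solve(A, states.T @ targets)

# Toy usage: recall the input from one step earlier (a memory primitive).
T = 1000
seq = np.eye(n_in)[rng.integers(0, n_in, T)]   # random one-hot sequence
S = run_reservoir(seq)
W_out = fit_readout(S[1:], seq[:-1])           # target = previous input
pred = S[1:] @ W_out
acc = (pred.argmax(1) == seq[:-1].argmax(1)).mean()
print(f"delayed-recall accuracy: {acc:.2f}")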

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-cisneros22a,
  title     = {Benchmarking Learning Efficiency in Deep Reservoir Computing},
  author    = {Cisneros, Hugo and Mikolov, Tomas and Sivic, Josef},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {532--547},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/cisneros22a/cisneros22a.pdf},
  url       = {https://proceedings.mlr.press/v199/cisneros22a.html}
}
Endnote
%0 Conference Paper
%T Benchmarking Learning Efficiency in Deep Reservoir Computing
%A Hugo Cisneros
%A Tomas Mikolov
%A Josef Sivic
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-cisneros22a
%I PMLR
%P 532--547
%U https://proceedings.mlr.press/v199/cisneros22a.html
%V 199
APA
Cisneros, H., Mikolov, T. & Sivic, J. (2022). Benchmarking Learning Efficiency in Deep Reservoir Computing. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:532-547. Available from https://proceedings.mlr.press/v199/cisneros22a.html.