Structured Evolution with Compact Architectures for Scalable Policy Optimization

Krzysztof Choromanski, Mark Rowland, Vikas Sindhwani, Richard Turner, Adrian Weller
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:970-978, 2018.

Abstract

We present a new method for blackbox optimization via gradient approximation using structured random orthogonal matrices, providing more accurate estimators than baselines, with provable theoretical guarantees. We show that this algorithm can be successfully applied to learn better-quality compact policies than standard gradient estimation techniques. The compact policies we learn have several advantages over unstructured ones, including faster training and faster inference. These benefits are important when the policy is deployed on real hardware with limited resources. Further, compact policies provide more scalable architectures for derivative-free optimization (DFO) in high-dimensional spaces. We show that most robotics tasks from the OpenAI Gym can be solved using neural networks with fewer than 300 parameters, with almost-linear time complexity at inference and with up to 13x fewer parameters than the Evolution Strategies (ES) algorithm introduced by Salimans et al. (2017). We do not need heuristics such as fitness shaping to learn good-quality policies, resulting in a simple and theoretically motivated training mechanism.
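
For a concrete picture of the estimator the abstract describes, below is a minimal NumPy sketch of an antithetic ES gradient estimator with orthogonally coupled perturbation directions. This is an illustration under assumptions, not the paper's implementation: the function names (orthogonal_directions, es_gradient) are ours, and the orthogonalization here uses a plain QR decomposition, whereas the paper also considers structured constructions (e.g., Hadamard-based) that make sampling and matrix-vector products cheaper.

import numpy as np

def orthogonal_directions(n_dirs, dim, rng):
    # Draw a Gaussian matrix and orthogonalize via QR; requires n_dirs <= dim.
    # Rows are rescaled to Gaussian-vector norms so each direction keeps the
    # marginal distribution of an i.i.d. Gaussian sample.
    assert n_dirs <= dim, "exact orthogonality needs n_dirs <= dim"
    g = rng.standard_normal((n_dirs, dim))
    q, _ = np.linalg.qr(g.T)                  # columns of q are orthonormal
    norms = np.linalg.norm(rng.standard_normal((n_dirs, dim)), axis=1)
    return q.T * norms[:, None]               # orthogonal rows, Gaussian norms

def es_gradient(f, theta, sigma=0.1, n_dirs=None, rng=None):
    # Antithetic ES estimate of grad f(theta):
    #   (1 / (2 sigma N)) * sum_i [f(theta + sigma e_i) - f(theta - sigma e_i)] e_i
    rng = rng if rng is not None else np.random.default_rng(0)
    dim = theta.size
    n_dirs = n_dirs if n_dirs is not None else dim
    grad = np.zeros(dim)
    for e in orthogonal_directions(n_dirs, dim, rng):
        grad += (f(theta + sigma * e) - f(theta - sigma * e)) * e
    return grad / (2.0 * sigma * n_dirs)

# Toy usage: a few ascent steps on a smooth blackbox objective.
f = lambda x: -np.sum((x - 1.0) ** 2)         # maximized at x = 1
theta = np.zeros(8)
rng = np.random.default_rng(42)
for _ in range(200):
    theta += 0.05 * es_gradient(f, theta, sigma=0.1, rng=rng)

Coupling the perturbations to be exactly orthogonal, rather than i.i.d., removes redundancy between directions, which is the source of the variance reduction the abstract claims; the structured variants studied in the paper aim for a similar effect at lower computational cost.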

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-choromanski18a,
  title     = {Structured Evolution with Compact Architectures for Scalable Policy Optimization},
  author    = {Choromanski, Krzysztof and Rowland, Mark and Sindhwani, Vikas and Turner, Richard and Weller, Adrian},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {970--978},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf},
  url       = {https://proceedings.mlr.press/v80/choromanski18a.html}
}
Endnote
%0 Conference Paper
%T Structured Evolution with Compact Architectures for Scalable Policy Optimization
%A Krzysztof Choromanski
%A Mark Rowland
%A Vikas Sindhwani
%A Richard Turner
%A Adrian Weller
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-choromanski18a
%I PMLR
%P 970--978
%U https://proceedings.mlr.press/v80/choromanski18a.html
%V 80
APA
Choromanski, K., Rowland, M., Sindhwani, V., Turner, R. & Weller, A. (2018). Structured Evolution with Compact Architectures for Scalable Policy Optimization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:970-978. Available from https://proceedings.mlr.press/v80/choromanski18a.html.
