Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

Ashley Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2825-2835, 2020.

Abstract

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.
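The abstract's $Q(s, s')$ formulation admits a Bellman-style backup over state transitions rather than actions: $Q(s, s') = r(s, s') + \gamma \max_{s''} Q(s', s'')$, where the max ranges over states reachable from $s'$. The following is a minimal tabular sketch of that update on a hypothetical 5-state deterministic chain (this toy MDP, its reward, and the `neighbors` helper are our assumptions for illustration, not the paper's benchmarks or method):

```python
# Tabular sketch of the QSS backup Q(s, s') = r(s, s') + gamma * max_{s''} Q(s', s'')
# on a toy 5-state chain. States are 0..4; the neighbors of s are s-1 and s+1;
# reaching state 4 (the goal) yields reward 1 and terminates.

n_states = 5
gamma = 0.9
goal = n_states - 1

def neighbors(s):
    # States reachable from s in one step (hypothetical deterministic dynamics).
    return [s2 for s2 in (s - 1, s + 1) if 0 <= s2 < n_states]

def reward(s, s2):
    return 1.0 if s2 == goal else 0.0

# Q[s][s2] is only meaningful for reachable transitions s -> s2.
Q = [[0.0] * n_states for _ in range(n_states)]
for _ in range(100):  # value-iteration sweeps until convergence
    for s in range(n_states):
        for s2 in neighbors(s):
            future = 0.0 if s2 == goal else max(Q[s2][s3] for s3 in neighbors(s2))
            Q[s][s2] = reward(s, s2) + gamma * future

# Acting: from each state, move to the neighboring state with the highest Q.
policy = {s: max(neighbors(s), key=lambda s2: Q[s][s2]) for s in range(n_states)}
print(policy)  # -> {0: 1, 1: 2, 2: 3, 3: 4, 4: 3}: states 0..3 step toward the goal
```

In small discrete spaces the max over successor states can be enumerated as above; the paper's contribution is handling the general case, where a learned forward dynamics model proposes the value-maximizing next state and actions are recovered separately, per the abstract's decoupling of actions from values.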

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-edwards20a,
  title     = {Estimating Q(s,s') with Deep Deterministic Dynamics Gradients},
  author    = {Edwards, Ashley and Sahni, Himanshu and Liu, Rosanne and Hung, Jane and Jain, Ankit and Wang, Rui and Ecoffet, Adrien and Miconi, Thomas and Isbell, Charles and Yosinski, Jason},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2825--2835},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/edwards20a/edwards20a.pdf},
  url       = {http://proceedings.mlr.press/v119/edwards20a.html},
  abstract  = {In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.}
}
Endnote
%0 Conference Paper
%T Estimating Q(s,s') with Deep Deterministic Dynamics Gradients
%A Ashley Edwards
%A Himanshu Sahni
%A Rosanne Liu
%A Jane Hung
%A Ankit Jain
%A Rui Wang
%A Adrien Ecoffet
%A Thomas Miconi
%A Charles Isbell
%A Jason Yosinski
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-edwards20a
%I PMLR
%P 2825--2835
%U http://proceedings.mlr.press/v119/edwards20a.html
%V 119
%X In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.
APA
Edwards, A., Sahni, H., Liu, R., Hung, J., Jain, A., Wang, R., Ecoffet, A., Miconi, T., Isbell, C. & Yosinski, J. (2020). Estimating Q(s,s') with Deep Deterministic Dynamics Gradients. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2825-2835. Available from http://proceedings.mlr.press/v119/edwards20a.html.