Latent Goal Analysis for Dimension Reduction in Reinforcement Learning

Matthias Rolf, Minoru Asada
Proceedings of The 4th Workshop on Machine Learning for Interactive Systems at ICML 2015, PMLR 43:26-30, 2015.

Abstract

In contrast to reinforcement learning, adaptive control formulations [Nguyen-Tuong and Peters, 2011] already come with expressive and typically low-dimensional goal and task representations, which have generally been considered more expressive than the RL setting [Kaelbling et al., 1996]. Goal and actual values in motor control define a relation similar [Rolf and Steil, 2014] to actual and target outputs in classical supervised learning settings by providing “directional information” in contrast to a mere “magnitude of an error” in reinforcement learning [Barto, 1994]. Recent work [Rolf and Asada, 2014], however, showed that these two problem formulations can be transformed into each other. Hence, highly descriptive task representations can be extracted from reinforcement learning problems by transforming them into adaptive control problems. After introducing the method, called Latent Goal Analysis, we discuss the possible application of this approach as a dimension reduction technique in reinforcement learning. Experimental results in a web recommender scenario confirm the potential of this technique.
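The core idea of the abstract — recovering a low-dimensional goal representation such that reward looks like a negative distance between a latent goal and a latent outcome — can be illustrated with a toy sketch. This is an illustrative assumption, not the paper's actual algorithm or data: it posits linear maps `G` (state to latent goal) and `H` (action to latent outcome), synthetic data, and plain gradient descent on the squared reward-reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RL-style data: high-dimensional states X, actions A, scalar rewards R.
# Hidden ground truth: reward is the negative squared distance between a 2-D
# latent goal G_true @ x and a 2-D latent outcome H_true @ a.
n, dx, da, dz = 500, 20, 10, 2
G_true = rng.normal(size=(dz, dx)) / np.sqrt(dx)
H_true = rng.normal(size=(dz, da)) / np.sqrt(da)
X = rng.normal(size=(n, dx))
A = rng.normal(size=(n, da))
R = -np.sum((X @ G_true.T - A @ H_true.T) ** 2, axis=1)

def lga_loss(G, H):
    """Squared error of the distance-based reward reconstruction."""
    pred = -np.sum((X @ G.T - A @ H.T) ** 2, axis=1)
    return float(np.mean((pred - R) ** 2))

# Fit linear maps G, H so that r_t ≈ -||G x_t - H a_t||^2 (gradient descent).
G = rng.normal(scale=0.1, size=(dz, dx))
H = rng.normal(scale=0.1, size=(dz, da))
loss_before = lga_loss(G, H)

lr = 0.005
for _ in range(3000):
    D = X @ G.T - A @ H.T                       # latent goal minus latent outcome
    err = -np.sum(D ** 2, axis=1) - R           # reconstruction error per sample
    grad_D = (-2.0 * D) * err[:, None] / n      # chain rule through -||D||^2
    G -= lr * grad_D.T @ X                      # D depends on G via +X G^T
    H -= lr * (-grad_D.T) @ A                   # ... and on H via -A H^T

loss_after = lga_loss(G, H)
```

After fitting, `X @ G.T` is a 2-D "goal" embedding of the 20-dimensional states that explains the observed rewards, which is the sense in which such an analysis could serve as a dimension reduction step for a downstream RL learner.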

Cite this Paper


BibTeX
@InProceedings{pmlr-v43-rolf15,
  title     = {Latent Goal Analysis for Dimension Reduction in Reinforcement Learning},
  author    = {Matthias Rolf and Minoru Asada},
  booktitle = {Proceedings of The 4th Workshop on Machine Learning for Interactive Systems at ICML 2015},
  pages     = {26--30},
  year      = {2015},
  editor    = {Heriberto Cuayáhuitl and Nina Dethlefs and Lutz Frommberger and Martijn Van Otterlo and Olivier Pietquin},
  volume    = {43},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {11 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v43/rolf15.pdf},
  url       = {http://proceedings.mlr.press/v43/rolf15.html},
  abstract  = {In contrast to reinforcement learning, adaptive control formulations [Nguyen-Tuong and Peters, 2011] already come with expressive and typically low-dimensional goal and task representations, which have generally been considered more expressive than the RL setting [Kaelbling et al., 1996]. Goal and actual values in motor control define a relation similar [Rolf and Steil, 2014] to actual and target outputs in classical supervised learning settings by providing “directional information” in contrast to a mere “magnitude of an error” in reinforcement learning [Barto, 1994]. Recent work [Rolf and Asada, 2014], however, showed that these two problem formulations can be transformed into each other. Hence, highly descriptive task representations can be extracted from reinforcement learning problems by transforming them into adaptive control problems. After introducing the method, called Latent Goal Analysis, we discuss the possible application of this approach as a dimension reduction technique in reinforcement learning. Experimental results in a web recommender scenario confirm the potential of this technique.}
}
Endnote
%0 Conference Paper
%T Latent Goal Analysis for Dimension Reduction in Reinforcement Learning
%A Matthias Rolf
%A Minoru Asada
%B Proceedings of The 4th Workshop on Machine Learning for Interactive Systems at ICML 2015
%C Proceedings of Machine Learning Research
%D 2015
%E Heriberto Cuayáhuitl
%E Nina Dethlefs
%E Lutz Frommberger
%E Martijn Van Otterlo
%E Olivier Pietquin
%F pmlr-v43-rolf15
%I PMLR
%J Proceedings of Machine Learning Research
%P 26--30
%U http://proceedings.mlr.press/v43/rolf15.html
%V 43
%W PMLR
%X In contrast to reinforcement learning, adaptive control formulations [Nguyen-Tuong and Peters, 2011] already come with expressive and typically low-dimensional goal and task representations, which have generally been considered more expressive than the RL setting [Kaelbling et al., 1996]. Goal and actual values in motor control define a relation similar [Rolf and Steil, 2014] to actual and target outputs in classical supervised learning settings by providing “directional information” in contrast to a mere “magnitude of an error” in reinforcement learning [Barto, 1994]. Recent work [Rolf and Asada, 2014], however, showed that these two problem formulations can be transformed into each other. Hence, highly descriptive task representations can be extracted from reinforcement learning problems by transforming them into adaptive control problems. After introducing the method, called Latent Goal Analysis, we discuss the possible application of this approach as a dimension reduction technique in reinforcement learning. Experimental results in a web recommender scenario confirm the potential of this technique.
RIS
TY  - CPAPER
TI  - Latent Goal Analysis for Dimension Reduction in Reinforcement Learning
AU  - Matthias Rolf
AU  - Minoru Asada
BT  - Proceedings of The 4th Workshop on Machine Learning for Interactive Systems at ICML 2015
PY  - 2015/06/18
DA  - 2015/06/18
ED  - Heriberto Cuayáhuitl
ED  - Nina Dethlefs
ED  - Lutz Frommberger
ED  - Martijn Van Otterlo
ED  - Olivier Pietquin
ID  - pmlr-v43-rolf15
PB  - PMLR
SP  - 26
EP  - 30
DP  - PMLR
L1  - http://proceedings.mlr.press/v43/rolf15.pdf
UR  - http://proceedings.mlr.press/v43/rolf15.html
AB  - In contrast to reinforcement learning, adaptive control formulations [Nguyen-Tuong and Peters, 2011] already come with expressive and typically low-dimensional goal and task representations, which have generally been considered more expressive than the RL setting [Kaelbling et al., 1996]. Goal and actual values in motor control define a relation similar [Rolf and Steil, 2014] to actual and target outputs in classical supervised learning settings by providing “directional information” in contrast to a mere “magnitude of an error” in reinforcement learning [Barto, 1994]. Recent work [Rolf and Asada, 2014], however, showed that these two problem formulations can be transformed into each other. Hence, highly descriptive task representations can be extracted from reinforcement learning problems by transforming them into adaptive control problems. After introducing the method, called Latent Goal Analysis, we discuss the possible application of this approach as a dimension reduction technique in reinforcement learning. Experimental results in a web recommender scenario confirm the potential of this technique.
ER  -
APA
Rolf, M. & Asada, M. (2015). Latent Goal Analysis for Dimension Reduction in Reinforcement Learning. Proceedings of The 4th Workshop on Machine Learning for Interactive Systems at ICML 2015, in PMLR 43:26-30.