Modelling Sparse Dynamical Systems with Compressed Predictive State Representations

William L. Hamilton, Mahdi Milani Fard, Joelle Pineau
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):178-186, 2013.

Abstract

Efficiently learning accurate models of dynamical systems is of central importance for developing rational agents that can succeed in a wide range of challenging domains. The difficulty of this learning problem is particularly acute in settings with large observation spaces and partial observability. We present a new algorithm, called Compressed Predictive State Representation (CPSR), for learning models of high-dimensional partially observable uncontrolled dynamical systems from small sample sets. The algorithm, which extends previous work on Predictive State Representations, exploits a particular sparse structure present in many domains. This sparse structure is used to compress information during learning, allowing for an increase in both the efficiency and predictive power. The compression technique also relieves the burden of domain specific feature selection and allows for domains with extremely large discrete observation spaces to be efficiently modelled. We present empirical results showing that the algorithm is able to build accurate models more efficiently than its uncompressed counterparts, and provide theoretical results on the accuracy of the learned compressed model.
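The compression idea described above can be illustrated with a minimal sketch: estimate a history–test co-occurrence matrix from sampled trajectories of a sparse-observation system, project both axes down with random matrices, and apply the spectral (SVD) step to the small compressed matrix instead of the full one. This is not the authors' implementation; the toy HMM, the projection dimension `d`, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy uncontrolled dynamical system: a small HMM with a large discrete
# observation space but sparse emissions (each state emits only 3 symbols).
n_states, n_obs = 4, 50
T = rng.dirichlet(np.ones(n_states), size=n_states)        # state transitions
O = np.zeros((n_states, n_obs))
for s in range(n_states):
    support = rng.choice(n_obs, size=3, replace=False)     # sparse support
    O[s, support] = rng.dirichlet(np.ones(3))

def sample_seq(length):
    """Sample a length-`length` observation sequence from the HMM."""
    s, seq = rng.integers(n_states), []
    for _ in range(length):
        seq.append(rng.choice(n_obs, p=O[s]))
        s = rng.choice(n_states, p=T[s])
    return seq

# Empirical history-test co-occurrence matrix for length-1 histories/tests.
P = np.zeros((n_obs, n_obs))
for _ in range(5000):
    seq = sample_seq(2)
    P[seq[0], seq[1]] += 1
P /= P.sum()

# Compression: random projections of the history and test axes, so the
# spectral step operates on a d x d matrix rather than n_obs x n_obs.
d = 8
Phi_H = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n_obs))
Phi_T = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n_obs))
C = Phi_T @ P @ Phi_H.T

# Spectral step on the compressed statistics; a low-rank factorization of C
# would yield the compressed model parameters.
U, sv, Vt = np.linalg.svd(C)
print(P.shape, C.shape)
```

The point of the sketch is the shape change: the SVD runs on an 8x8 matrix rather than 50x50, which is where the sample-efficiency and runtime gains of compression come from in large observation spaces.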

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-hamilton13,
  title     = {Modelling Sparse Dynamical Systems with Compressed Predictive State Representations},
  author    = {William L. Hamilton and Mahdi Milani Fard and Joelle Pineau},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {178--186},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/hamilton13.pdf},
  url       = {http://proceedings.mlr.press/v28/hamilton13.html},
  abstract  = {Efficiently learning accurate models of dynamical systems is of central importance for developing rational agents that can succeed in a wide range of challenging domains. The difficulty of this learning problem is particularly acute in settings with large observation spaces and partial observability. We present a new algorithm, called Compressed Predictive State Representation (CPSR), for learning models of high-dimensional partially observable uncontrolled dynamical systems from small sample sets. The algorithm, which extends previous work on Predictive State Representations, exploits a particular sparse structure present in many domains. This sparse structure is used to compress information during learning, allowing for an increase in both the efficiency and predictive power. The compression technique also relieves the burden of domain specific feature selection and allows for domains with extremely large discrete observation spaces to be efficiently modelled. We present empirical results showing that the algorithm is able to build accurate models more efficiently than its uncompressed counterparts, and provide theoretical results on the accuracy of the learned compressed model.}
}
Endnote
%0 Conference Paper
%T Modelling Sparse Dynamical Systems with Compressed Predictive State Representations
%A William L. Hamilton
%A Mahdi Milani Fard
%A Joelle Pineau
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-hamilton13
%I PMLR
%J Proceedings of Machine Learning Research
%P 178--186
%U http://proceedings.mlr.press
%V 28
%N 1
%W PMLR
%X Efficiently learning accurate models of dynamical systems is of central importance for developing rational agents that can succeed in a wide range of challenging domains. The difficulty of this learning problem is particularly acute in settings with large observation spaces and partial observability. We present a new algorithm, called Compressed Predictive State Representation (CPSR), for learning models of high-dimensional partially observable uncontrolled dynamical systems from small sample sets. The algorithm, which extends previous work on Predictive State Representations, exploits a particular sparse structure present in many domains. This sparse structure is used to compress information during learning, allowing for an increase in both the efficiency and predictive power. The compression technique also relieves the burden of domain specific feature selection and allows for domains with extremely large discrete observation spaces to be efficiently modelled. We present empirical results showing that the algorithm is able to build accurate models more efficiently than its uncompressed counterparts, and provide theoretical results on the accuracy of the learned compressed model.
RIS
TY  - CPAPER
TI  - Modelling Sparse Dynamical Systems with Compressed Predictive State Representations
AU  - William L. Hamilton
AU  - Mahdi Milani Fard
AU  - Joelle Pineau
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-hamilton13
PB  - PMLR
SP  - 178
DP  - PMLR
EP  - 186
L1  - http://proceedings.mlr.press/v28/hamilton13.pdf
UR  - http://proceedings.mlr.press/v28/hamilton13.html
AB  - Efficiently learning accurate models of dynamical systems is of central importance for developing rational agents that can succeed in a wide range of challenging domains. The difficulty of this learning problem is particularly acute in settings with large observation spaces and partial observability. We present a new algorithm, called Compressed Predictive State Representation (CPSR), for learning models of high-dimensional partially observable uncontrolled dynamical systems from small sample sets. The algorithm, which extends previous work on Predictive State Representations, exploits a particular sparse structure present in many domains. This sparse structure is used to compress information during learning, allowing for an increase in both the efficiency and predictive power. The compression technique also relieves the burden of domain specific feature selection and allows for domains with extremely large discrete observation spaces to be efficiently modelled. We present empirical results showing that the algorithm is able to build accurate models more efficiently than its uncompressed counterparts, and provide theoretical results on the accuracy of the learned compressed model.
ER  -
APA
Hamilton, W.L., Fard, M.M. & Pineau, J. (2013). Modelling Sparse Dynamical Systems with Compressed Predictive State Representations. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(1):178-186.