Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):495-503, 2013.
Abstract
In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
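The abstract describes TD-style updates for the variance of the reward-to-go. As a rough illustration only, below is a minimal sketch (not the paper's exact algorithm) of a TD(0)-style update that learns linear estimates of both the value J(x) and the second moment M(x) of the return, from which a variance estimate follows as M(x) - J(x)^2. It assumes a discounted setting and a hypothetical environment/feature interface (env, features, gamma, alpha); the paper itself works in an episodic setting and proves convergence, neither of which this sketch reproduces.

# Hedged sketch: coupled TD(0)-style updates for the first and second
# moments of the reward-to-go with linear function approximation.
# `env`, `features`, `gamma`, and `alpha` are illustrative assumptions,
# not taken from the paper.
import numpy as np

def td0_value_and_second_moment(env, features, num_features,
                                gamma=0.95, alpha=0.01, episodes=1000):
    w_j = np.zeros(num_features)  # weights for J(x) ~= phi(x) . w_j
    w_m = np.zeros(num_features)  # weights for M(x) ~= phi(x) . w_m

    for _ in range(episodes):
        x = env.reset()
        done = False
        while not done:
            # one transition under the evaluated (fixed) policy -- assumed API
            x_next, r, done = env.step()
            phi, phi_next = features(x), features(x_next)

            j_next = 0.0 if done else phi_next @ w_j
            m_next = 0.0 if done else phi_next @ w_m

            # standard TD error for the value: J(x) = E[r + gamma * J(x')]
            delta_j = r + gamma * j_next - phi @ w_j
            # TD-style error for the second moment, using the identity
            # M(x) = E[r^2 + 2*gamma*r*J(x') + gamma^2 * M(x')]
            delta_m = r**2 + 2.0 * gamma * r * j_next + gamma**2 * m_next - phi @ w_m

            w_j += alpha * delta_j * phi
            w_m += alpha * delta_m * phi
            x = x_next

    def variance(x):
        phi = features(x)
        return phi @ w_m - (phi @ w_j) ** 2  # Var = second moment - mean^2

    return w_j, w_m, variance

The second-moment update rests on the identity R^2 = r^2 + 2*gamma*r*R' + gamma^2*R'^2 for the discounted return; the combination of the two learned estimates yields the variance of the cumulative reward that the criteria in the abstract depend on.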
BibTeX
@InProceedings{pmlr-v28-tamar13,
title = {Temporal Difference Methods for the Variance of the Reward To Go},
author = {Aviv Tamar and Dotan Di Castro and Shie Mannor},
booktitle = {Proceedings of the 30th International Conference on Machine Learning},
pages = {495--503},
year = {2013},
editor = {Sanjoy Dasgupta and David McAllester},
volume = {28},
number = {3},
series = {Proceedings of Machine Learning Research},
address = {Atlanta, Georgia, USA},
month = {17--19 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v28/tamar13.pdf},
url = {http://proceedings.mlr.press/v28/tamar13.html},
abstract = {In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.}
}
Endnote
%0 Conference Paper
%T Temporal Difference Methods for the Variance of the Reward To Go
%A Aviv Tamar
%A Dotan Di Castro
%A Shie Mannor
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-tamar13
%I PMLR
%J Proceedings of Machine Learning Research
%P 495--503
%U http://proceedings.mlr.press/v28/tamar13.html
%V 28
%N 3
%W PMLR
%X In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
RIS
TY - CPAPER
TI - Temporal Difference Methods for the Variance of the Reward To Go
AU - Aviv Tamar
AU - Dotan Di Castro
AU - Shie Mannor
BT - Proceedings of the 30th International Conference on Machine Learning
PY - 2013/02/13
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-tamar13
PB - PMLR
SP - 495
DP - PMLR
EP - 503
L1 - http://proceedings.mlr.press/v28/tamar13.pdf
UR - http://proceedings.mlr.press/v28/tamar13.html
AB - In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
ER -
APA
Tamar, A., Di Castro, D. & Mannor, S. (2013). Temporal Difference Methods for the Variance of the Reward To Go. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):495-503.