Smooth Imitation Learning for Online Sequence Prediction

Hoang Le, Andrew Kang, Yisong Yue, Peter Carr
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:680-688, 2016.

Abstract

We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach to reduce smooth imitation learning to a regression problem using complex function classes that are regularized to ensure smoothness. We present a learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties, including being fully deterministic, employing an adaptive learning rate that can provably yield larger policy improvements compared to previous approaches, and the ability to ensure stable convergence. Our empirical results demonstrate significant performance gains over previous approaches.

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-le16,
  title     = {Smooth Imitation Learning for Online Sequence Prediction},
  author    = {Le, Hoang and Kang, Andrew and Yue, Yisong and Carr, Peter},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {680--688},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/le16.pdf},
  url       = {https://proceedings.mlr.press/v48/le16.html},
  abstract  = {We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach to reduce smooth imitation learning to a regression problem using complex function classes that are regularized to ensure smoothness. We present a learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties, including being fully deterministic, employing an adaptive learning rate that can provably yield larger policy improvements compared to previous approaches, and the ability to ensure stable convergence. Our empirical results demonstrate significant performance gains over previous approaches.}
}
EndNote
%0 Conference Paper
%T Smooth Imitation Learning for Online Sequence Prediction
%A Hoang Le
%A Andrew Kang
%A Yisong Yue
%A Peter Carr
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-le16
%I PMLR
%P 680--688
%U https://proceedings.mlr.press/v48/le16.html
%V 48
%X We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach to reduce smooth imitation learning to a regression problem using complex function classes that are regularized to ensure smoothness. We present a learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties, including being fully deterministic, employing an adaptive learning rate that can provably yield larger policy improvements compared to previous approaches, and the ability to ensure stable convergence. Our empirical results demonstrate significant performance gains over previous approaches.
RIS
TY  - CPAPER
TI  - Smooth Imitation Learning for Online Sequence Prediction
AU  - Hoang Le
AU  - Andrew Kang
AU  - Yisong Yue
AU  - Peter Carr
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-le16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 680
EP  - 688
L1  - http://proceedings.mlr.press/v48/le16.pdf
UR  - https://proceedings.mlr.press/v48/le16.html
AB  - We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to behavior is often complex, we take a learning reduction approach to reduce smooth imitation learning to a regression problem using complex function classes that are regularized to ensure smoothness. We present a learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties, including being fully deterministic, employing an adaptive learning rate that can provably yield larger policy improvements compared to previous approaches, and the ability to ensure stable convergence. Our empirical results demonstrate significant performance gains over previous approaches.
ER  -
APA
Le, H., Kang, A., Yue, Y. & Carr, P. (2016). Smooth Imitation Learning for Online Sequence Prediction. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:680-688. Available from https://proceedings.mlr.press/v48/le16.html.