Learning Calibratable Policies using Programmatic Style-Consistency

Eric Zhan, Albert Tseng, Yisong Yue, Adith Swaminathan, Matthew Hausknecht
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11001-11011, 2020.

Abstract

We study the problem of controllable generation of long-term sequential behaviors, where the goal is to calibrate to multiple behavior styles simultaneously. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are two questions that pose significant challenges when generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated behavior faithfully demonstrates combinatorially many styles? We leverage programmatic labeling functions to specify controllable styles, and derive a formal notion of style-consistency as a learning objective, which can then be solved using conventional policy learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that existing approaches that do not explicitly enforce style-consistency fail to generate diverse behaviors, whereas our learned policies can be calibrated for up to $4^5$ (1024) distinct style combinations.
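The two key objects in the abstract can be made concrete with a toy sketch (this is not code from the paper; the function names, thresholds, and the speed-based style are illustrative assumptions): a programmatic labeling function maps a trajectory to a discrete style label, and style-consistency is the rate at which rollouts conditioned on a target label are assigned that same label by the labeling function.

```python
# Illustrative sketch only (hypothetical names and thresholds, not the paper's code).
import numpy as np

def speed_label(trajectory, thresholds=(2.0, 4.0, 6.0)):
    """Hypothetical labeling function: bucket a trajectory (T x 2 array of
    xy-positions) into one of 4 style classes by its mean step displacement."""
    displacements = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return int(np.digitize(displacements.mean(), thresholds))  # label in {0, 1, 2, 3}

def style_consistency(policy_rollout, labeling_fn, num_classes=4, samples_per_class=100):
    """Fraction of rollouts whose programmatic label matches the style label
    the policy was conditioned on when generating them."""
    hits = 0
    for target in range(num_classes):
        for _ in range(samples_per_class):
            traj = policy_rollout(target)          # rollout conditioned on target style
            hits += int(labeling_fn(traj) == target)
    return hits / (num_classes * samples_per_class)
```

With several such labeling functions (e.g. one per style dimension), a policy conditioned on a vector of labels can be scored on all dimensions jointly, which is how combinatorially many style combinations (such as $4^5$) can be evaluated.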

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhan20a,
  title     = {Learning Calibratable Policies using Programmatic Style-Consistency},
  author    = {Zhan, Eric and Tseng, Albert and Yue, Yisong and Swaminathan, Adith and Hausknecht, Matthew},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11001--11011},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhan20a/zhan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/zhan20a.html},
  abstract  = {We study the problem of controllable generation of long-term sequential behaviors, where the goal is to calibrate to multiple behavior styles simultaneously. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are two questions that pose significant challenges when generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated behavior faithfully demonstrates combinatorially many styles? We leverage programmatic labeling functions to specify controllable styles, and derive a formal notion of style-consistency as a learning objective, which can then be solved using conventional policy learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that existing approaches that do not explicitly enforce style-consistency fail to generate diverse behaviors whereas our learned policies can be calibrated for up to $4^5 (1024)$ distinct style combinations.}
}
Endnote
%0 Conference Paper
%T Learning Calibratable Policies using Programmatic Style-Consistency
%A Eric Zhan
%A Albert Tseng
%A Yisong Yue
%A Adith Swaminathan
%A Matthew Hausknecht
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhan20a
%I PMLR
%P 11001--11011
%U https://proceedings.mlr.press/v119/zhan20a.html
%V 119
%X We study the problem of controllable generation of long-term sequential behaviors, where the goal is to calibrate to multiple behavior styles simultaneously. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are two questions that pose significant challenges when generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated behavior faithfully demonstrates combinatorially many styles? We leverage programmatic labeling functions to specify controllable styles, and derive a formal notion of style-consistency as a learning objective, which can then be solved using conventional policy learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that existing approaches that do not explicitly enforce style-consistency fail to generate diverse behaviors whereas our learned policies can be calibrated for up to $4^5 (1024)$ distinct style combinations.
APA
Zhan, E., Tseng, A., Yue, Y., Swaminathan, A. &amp; Hausknecht, M. (2020). Learning Calibratable Policies using Programmatic Style-Consistency. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11001-11011. Available from https://proceedings.mlr.press/v119/zhan20a.html.
