Robust Multi-objective Learning with Mentor Feedback

Alekh Agarwal, Ashwinkumar Badanidiyuru, Miroslav Dudík, Robert E. Schapire, Aleksandrs Slivkins
Proceedings of The 27th Conference on Learning Theory, PMLR 35:726-741, 2014.

Abstract

We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
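One way to read the abstract's max-min guarantee is via the following illustrative formalization, written here as a compilable LaTeX sketch. The notation (d objectives, reward vectors r, learner actions a_t, mentor actions b_t, the improvement terms Δ, and the sample rate shown) is ours and is not taken from the paper, which gives the precise definitions and bound.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative formalization only (assumed notation, not the paper's):
% each action $a$ yields a reward vector $r(a) \in [0,1]^d$ over $d$
% objectives; the learner plays $a_1,\dots,a_T$ while the mentor plays
% $b_1,\dots,b_T$.
The average improvement of a fixed comparator action $a$ over the mentor
on objective $j$, and the learner's realized improvement, are
\[
  \Delta_j(a) = \frac{1}{T}\sum_{t=1}^{T}\bigl(r_j(a) - r_j(b_t)\bigr),
  \qquad
  \widehat{\Delta}_j = \frac{1}{T}\sum_{t=1}^{T}\bigl(r_j(a_t) - r_j(b_t)\bigr).
\]
Competing with the optimal possible improvement across all objectives then
gives a regret of the form
\[
  \mathrm{Regret}_T
    = \max_{a}\,\min_{1 \le j \le d} \Delta_j(a)
      \;-\; \min_{1 \le j \le d} \widehat{\Delta}_j .
\]
% A vanishing bound that is independent of the number of actions and
% logarithmic in $d$, e.g.\ one of the form $O\bigl(\sqrt{(\log d)/T}\bigr)$,
% would be consistent with the abstract's description.
\end{document}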

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-agarwal14b,
  title     = {Robust Multi-objective Learning with Mentor Feedback},
  author    = {Agarwal, Alekh and Badanidiyuru, Ashwinkumar and Dudík, Miroslav and Schapire, Robert E. and Slivkins, Aleksandrs},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {726--741},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/agarwal14b.pdf},
  url       = {https://proceedings.mlr.press/v35/agarwal14b.html},
  abstract  = {We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.}
}
Endnote
%0 Conference Paper
%T Robust Multi-objective Learning with Mentor Feedback
%A Alekh Agarwal
%A Ashwinkumar Badanidiyuru
%A Miroslav Dudík
%A Robert E. Schapire
%A Aleksandrs Slivkins
%B Proceedings of The 27th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2014
%E Maria Florina Balcan
%E Vitaly Feldman
%E Csaba Szepesvári
%F pmlr-v35-agarwal14b
%I PMLR
%P 726--741
%U https://proceedings.mlr.press/v35/agarwal14b.html
%V 35
%X We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
RIS
TY  - CPAPER
TI  - Robust Multi-objective Learning with Mentor Feedback
AU  - Alekh Agarwal
AU  - Ashwinkumar Badanidiyuru
AU  - Miroslav Dudík
AU  - Robert E. Schapire
AU  - Aleksandrs Slivkins
BT  - Proceedings of The 27th Conference on Learning Theory
DA  - 2014/05/29
ED  - Maria Florina Balcan
ED  - Vitaly Feldman
ED  - Csaba Szepesvári
ID  - pmlr-v35-agarwal14b
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 35
SP  - 726
EP  - 741
L1  - http://proceedings.mlr.press/v35/agarwal14b.pdf
UR  - https://proceedings.mlr.press/v35/agarwal14b.html
AB  - We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
ER  -
APA
Agarwal, A., Badanidiyuru, A., Dudík, M., Schapire, R.E. & Slivkins, A. (2014). Robust Multi-objective Learning with Mentor Feedback. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:726-741. Available from https://proceedings.mlr.press/v35/agarwal14b.html.