MaxHedge: Maximizing a Maximum Online

Stephen Pasteris, Fabio Vitale, Kevin Chan, Shiqiang Wang, Mark Herbster
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:1851-1859, 2019.

Abstract

We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the selected actions cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. The rewards and costs associated with each action change over time and are revealed at each trial only after the learner’s selection of actions. Our framework encompasses several online learning problems in which the environment changes over time, and in which the solution trades off minimising the costs against maximising the maximum reward of the selected subset of actions, subject to the action energy budget. The algorithm that we propose is efficient and general, and may be specialised to multiple natural online combinatorial problems.
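
As an illustration of the problem protocol described in the abstract (not of the MaxHedge algorithm itself, which the abstract does not specify), the following Python sketch plays the online game with a naive feasible selection rule. The action set, the randomly drawn rewards and costs, and the greedy random-order selection policy are illustrative assumptions.

import random

# Known, fixed problem parameters (illustrative values).
n_actions = 5
energy = [1.0, 2.0, 1.5, 3.0, 0.5]   # energy of each action (fixed and known)
budget = 4.0                          # total energy budget per trial
n_trials = 100

def profit(selected, rewards, costs):
    # Per-trial profit: maximum reward among the selected actions
    # minus the sum of their costs; an empty selection earns zero.
    if not selected:
        return 0.0
    return max(rewards[i] for i in selected) - sum(costs[i] for i in selected)

cumulative_profit = 0.0
for t in range(n_trials):
    # The learner must commit to a feasible subset before this trial's
    # rewards and costs are revealed.  Here: scan the actions in random
    # order and keep any action that still fits within the energy budget
    # (a placeholder policy, not the paper's algorithm).
    selected, used_energy = [], 0.0
    for i in random.sample(range(n_actions), n_actions):
        if used_energy + energy[i] <= budget:
            selected.append(i)
            used_energy += energy[i]

    # The environment now reveals this trial's rewards and costs,
    # which may change arbitrarily from trial to trial.
    rewards = [random.random() for _ in range(n_actions)]
    costs = [0.2 * random.random() for _ in range(n_actions)]

    cumulative_profit += profit(selected, rewards, costs)

print(f"cumulative profit over {n_trials} trials: {cumulative_profit:.2f}")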

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-pasteris19a,
  title     = {MaxHedge: Maximizing a Maximum Online},
  author    = {Pasteris, Stephen and Vitale, Fabio and Chan, Kevin and Wang, Shiqiang and Herbster, Mark},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {1851--1859},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/pasteris19a/pasteris19a.pdf},
  url       = {https://proceedings.mlr.press/v89/pasteris19a.html},
  abstract  = {We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the selected actions cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. The rewards and costs associated with each action change over time and are revealed at each trial only after the learner’s selection of actions. Our framework encompasses several online learning problems in which the environment changes over time, and in which the solution trades off minimising the costs against maximising the maximum reward of the selected subset of actions, subject to the action energy budget. The algorithm that we propose is efficient and general, and may be specialised to multiple natural online combinatorial problems.}
}
Endnote
%0 Conference Paper
%T MaxHedge: Maximizing a Maximum Online
%A Stephen Pasteris
%A Fabio Vitale
%A Kevin Chan
%A Shiqiang Wang
%A Mark Herbster
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-pasteris19a
%I PMLR
%P 1851--1859
%U https://proceedings.mlr.press/v89/pasteris19a.html
%V 89
%X We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the selected actions cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. The rewards and costs associated with each action change over time and are revealed at each trial only after the learner’s selection of actions. Our framework encompasses several online learning problems in which the environment changes over time, and in which the solution trades off minimising the costs against maximising the maximum reward of the selected subset of actions, subject to the action energy budget. The algorithm that we propose is efficient and general, and may be specialised to multiple natural online combinatorial problems.
APA
Pasteris, S., Vitale, F., Chan, K., Wang, S. & Herbster, M. (2019). MaxHedge: Maximizing a Maximum Online. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:1851-1859. Available from https://proceedings.mlr.press/v89/pasteris19a.html.
