On-Line Learning Algorithms for Path Experts with Non-Additive Losses

Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Manfred Warmuth
Proceedings of The 28th Conference on Learning Theory, PMLR 40:424-447, 2015.

Abstract

We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Cortes15,
  title     = {On-Line Learning Algorithms for Path Experts with Non-Additive Losses},
  author    = {Corinna Cortes and Vitaly Kuznetsov and Mehryar Mohri and Manfred Warmuth},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {424--447},
  year      = {2015},
  editor    = {Peter Grünwald and Elad Hazan and Satyen Kale},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Cortes15.pdf},
  url       = {http://proceedings.mlr.press/v40/Cortes15.html},
  abstract  = {We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.}
}
Endnote
%0 Conference Paper
%T On-Line Learning Algorithms for Path Experts with Non-Additive Losses
%A Corinna Cortes
%A Vitaly Kuznetsov
%A Mehryar Mohri
%A Manfred Warmuth
%B Proceedings of The 28th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2015
%E Peter Grünwald
%E Elad Hazan
%E Satyen Kale
%F pmlr-v40-Cortes15
%I PMLR
%J Proceedings of Machine Learning Research
%P 424--447
%U http://proceedings.mlr.press
%V 40
%W PMLR
%X We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
RIS
TY  - CPAPER
TI  - On-Line Learning Algorithms for Path Experts with Non-Additive Losses
AU  - Corinna Cortes
AU  - Vitaly Kuznetsov
AU  - Mehryar Mohri
AU  - Manfred Warmuth
BT  - Proceedings of The 28th Conference on Learning Theory
PY  - 2015/06/26
DA  - 2015/06/26
ED  - Peter Grünwald
ED  - Elad Hazan
ED  - Satyen Kale
ID  - pmlr-v40-Cortes15
PB  - PMLR
SP  - 424
DP  - PMLR
EP  - 447
L1  - http://proceedings.mlr.press/v40/Cortes15.pdf
UR  - http://proceedings.mlr.press/v40/Cortes15.html
AB  - We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both of these families. We prove that the time complexity of our extensions to rational losses of both FPL and RWM is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
ER  -
APA
Cortes, C., Kuznetsov, V., Mohri, M. & Warmuth, M. (2015). On-Line Learning Algorithms for Path Experts with Non-Additive Losses. Proceedings of The 28th Conference on Learning Theory, in PMLR 40:424-447.