Exp-Concavity of Proper Composite Losses

Parameswaran Kamalaruban, Robert Williamson, Xinhua Zhang
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1035-1065, 2015.

Abstract

The goal of online prediction with expert advice is to find a decision strategy that performs almost as well as the best expert in a given pool of experts, on any sequence of outcomes. This problem has been widely studied, and O(\sqrt{T}) and O(\log T) regret bounds can be achieved for convex losses and for strictly convex losses with bounded first and second derivatives, respectively. In special cases like the Aggregating Algorithm with mixable losses and the Weighted Average Algorithm with exp-concave losses, it is possible to achieve O(1) regret bounds. Since mixability and exp-concavity are roughly equivalent under certain conditions, understanding the underlying relationship between these two notions lets us gain the best of both algorithms (the strong theoretical performance guarantees of the Aggregating Algorithm and the computational efficiency of the Weighted Average Algorithm). In this paper we provide a complete characterization of the exp-concavity of any proper composite loss. Using this characterization and the mixability condition of proper losses, we show that it is possible to transform (re-parameterize) any β-mixable binary proper loss into a β-exp-concave composite loss with the same β. In the multi-class case, we propose an approximation approach for this transformation.
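For orientation, recall the standard definitions (not specific to this paper): a loss ℓ is β-exp-concave if, for every outcome y, exp(-βℓ(v, y)) is concave in the prediction v, and β-mixable if applying z -> exp(-βz) coordinatewise to its superprediction set gives a convex set; exp-concavity implies mixability, but not conversely. The following minimal sketch, in Python with illustrative names and toy data (not from the paper), shows why exp-concavity yields O(1) regret: for the binary log loss, which is 1-exp-concave, the Weighted Average Algorithm that plays the exponentially weighted mean of the expert predictions has regret at most (ln N)/β against N experts, independent of T.

import numpy as np

# Weighted Average Algorithm sketch for the binary log loss (beta = 1).
# Each expert's weight decays by exp(-beta * its own loss), and the learner
# predicts the weighted average of the expert predictions. Because
# exp(-beta * loss) is concave in the prediction, the learner's per-round
# loss is at most the mix loss of the weight update, which telescopes to a
# total regret of at most ln(N) / beta.

def log_loss(p, y):
    # Log loss of predicted probability p for outcome y in {0, 1}.
    p = np.clip(p, 1e-12, 1 - 1e-12)   # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def weighted_average_algorithm(expert_preds, outcomes, beta=1.0):
    # expert_preds: (T, N) array of expert probabilities; outcomes: length-T 0/1 array.
    T, N = expert_preds.shape
    log_w = np.zeros(N)                # uniform prior over experts
    total = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                   # normalized weights
        total += log_loss(w @ expert_preds[t], outcomes[t])
        log_w -= beta * log_loss(expert_preds[t], outcomes[t])  # exponential update
    return total

rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=500)
experts = np.column_stack([np.full(500, 0.5),                   # uninformative expert
                           np.clip(outcomes + rng.normal(0, 0.1, 500), 0.05, 0.95)])
learner = weighted_average_algorithm(experts, outcomes)
best = min(log_loss(experts[:, i], outcomes).sum() for i in range(experts.shape[1]))
print(f"learner: {learner:.2f}  best expert: {best:.2f}  regret: {learner - best:.2f}")
# With N = 2 experts the regret stays below ln(2), no matter how large T grows.

Losses that are β-mixable but not β-exp-concave in their natural parameterization attain the same (ln N)/β bound only via the Aggregating Algorithm's substitution step; the paper's result identifies when a re-parameterization by a link function turns β-mixability into β-exp-concavity, so that the simple averaging above suffices.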

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Kamalaruban15,
  title     = {Exp-Concavity of Proper Composite Losses},
  author    = {Kamalaruban, Parameswaran and Williamson, Robert and Zhang, Xinhua},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1035--1065},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Kamalaruban15.pdf},
  url       = {https://proceedings.mlr.press/v40/Kamalaruban15.html}
}
APA
Kamalaruban, P., Williamson, R. & Zhang, X. (2015). Exp-Concavity of Proper Composite Losses. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1035-1065. Available from https://proceedings.mlr.press/v40/Kamalaruban15.html.
