Passive Learning with Target Risk

Mehrdad Mahdavi, Rong Jin
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:252-269, 2013.

Abstract

In this paper we consider learning in the passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory that only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to O(log(1/ε)), an exponential improvement over the O(1/ε) sample complexity for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for this setting, which demonstrates that the proposed algorithm is practically useful.
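To get a feel for the gap between the two rates stated in the abstract, here is a quick numeric comparison. The hidden constants in both bounds are taken as 1 purely for illustration; the paper's actual bounds depend on the strong-convexity and smoothness parameters.

```python
import math

def samples_strongly_convex(eps):
    """Samples needed under the O(1/eps) rate for strongly convex
    losses, with the hidden constant taken as 1 for illustration."""
    return math.ceil(1 / eps)

def samples_with_target_risk(eps):
    """Samples needed under the O(log(1/eps)) rate when the target
    risk is known in advance, again with constant 1."""
    return math.ceil(math.log(1 / eps))

# The gap widens dramatically as the target accuracy eps shrinks.
for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:g}: O(1/eps) -> {samples_strongly_convex(eps)}, "
          f"O(log(1/eps)) -> {samples_with_target_risk(eps)}")
```

At ε = 10⁻⁶ the illustrative counts are 1,000,000 versus 14, which is what "exponential improvement" means here: the dependence on 1/ε drops from linear to logarithmic.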

Cite this Paper


BibTeX
@InProceedings{pmlr-v30-Mahdavi13,
  title     = {Passive Learning with Target Risk},
  author    = {Mahdavi, Mehrdad and Jin, Rong},
  booktitle = {Proceedings of the 26th Annual Conference on Learning Theory},
  pages     = {252--269},
  year      = {2013},
  editor    = {Shalev-Shwartz, Shai and Steinwart, Ingo},
  volume    = {30},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton, NJ, USA},
  month     = {12--14 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v30/Mahdavi13.pdf},
  url       = {https://proceedings.mlr.press/v30/Mahdavi13.html},
  abstract  = {In this paper we consider learning in the passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory that only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to $\mathcal{O}(\log(1/\varepsilon))$, an exponential improvement over the $\mathcal{O}(1/\varepsilon)$ sample complexity for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for this setting, which demonstrates that the proposed algorithm is practically useful.}
}
Endnote
%0 Conference Paper
%T Passive Learning with Target Risk
%A Mehrdad Mahdavi
%A Rong Jin
%B Proceedings of the 26th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2013
%E Shai Shalev-Shwartz
%E Ingo Steinwart
%F pmlr-v30-Mahdavi13
%I PMLR
%P 252--269
%U https://proceedings.mlr.press/v30/Mahdavi13.html
%V 30
%X In this paper we consider learning in the passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory that only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to O(log(1/ε)), an exponential improvement over the O(1/ε) sample complexity for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for this setting, which demonstrates that the proposed algorithm is practically useful.
RIS
TY  - CPAPER
TI  - Passive Learning with Target Risk
AU  - Mehrdad Mahdavi
AU  - Rong Jin
BT  - Proceedings of the 26th Annual Conference on Learning Theory
DA  - 2013/06/13
ED  - Shai Shalev-Shwartz
ED  - Ingo Steinwart
ID  - pmlr-v30-Mahdavi13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 30
SP  - 252
EP  - 269
L1  - http://proceedings.mlr.press/v30/Mahdavi13.pdf
UR  - https://proceedings.mlr.press/v30/Mahdavi13.html
AB  - In this paper we consider learning in the passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory that only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to O(log(1/ε)), an exponential improvement over the O(1/ε) sample complexity for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for this setting, which demonstrates that the proposed algorithm is practically useful.
ER  -
APA
Mahdavi, M. & Jin, R. (2013). Passive Learning with Target Risk. Proceedings of the 26th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 30:252-269. Available from https://proceedings.mlr.press/v30/Mahdavi13.html.