Learning Theory for Conditional Risk Minimization

Alexander Zimin, Christoph Lampert
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:213-222, 2017.

Abstract

In this work we study the learnability of stochastic processes with respect to the conditional risk, i.e. the existence of a learning algorithm that improves its next-step performance with the amount of observed data. We introduce a notion of pairwise discrepancy between conditional distributions at different time steps and show how certain properties of these discrepancies can be used to construct a successful learning algorithm. Our main results are two theorems that establish criteria for learnability for many classes of stochastic processes, including all special cases studied previously in the literature.
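For orientation, a minimal sketch in plain LaTeX of the quantities the abstract refers to, following the standard formulation of conditional risk and discrepancy in the time-series learning literature; the paper's exact definitions may differ in detail. Here $\ell$ is a loss function, $\mathcal{H}$ a hypothesis class, $z_1, z_2, \ldots$ the observed process, and $P_t$ denotes the conditional distribution of $z_{t+1}$ given $z_1, \ldots, z_t$:

% Conditional risk of a hypothesis h after observing z_1, ..., z_t,
% and the pairwise discrepancy between conditional distributions
% at two time steps t and s (an assumed, illustrative formalization):
\[
R_t(h) \;=\; \mathbb{E}\bigl[\ell(h, z_{t+1}) \,\big|\, z_1, \ldots, z_t\bigr],
\qquad
d(P_t, P_s) \;=\; \sup_{h \in \mathcal{H}}
\bigl|\mathbb{E}_{z \sim P_t}[\ell(h, z)] - \mathbb{E}_{z \sim P_s}[\ell(h, z)]\bigr|.
\]

Under this reading, learnability asks for an algorithm whose hypothesis after $t$ observations has conditional risk approaching that of the best fixed $h \in \mathcal{H}$; small discrepancies between time steps make past data informative about the next step.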

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-zimin17a,
  title     = {{Learning Theory for Conditional Risk Minimization}},
  author    = {Zimin, Alexander and Lampert, Christoph},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {213--222},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/zimin17a/zimin17a.pdf},
  url       = {https://proceedings.mlr.press/v54/zimin17a.html},
  abstract  = {In this work we study the learnability of stochastic processes with respect to the conditional risk, i.e. the existence of a learning algorithm that improves its next-step performance with the amount of observed data. We introduce a notion of pairwise discrepancy between conditional distributions at different time steps and show how certain properties of these discrepancies can be used to construct a successful learning algorithm. Our main results are two theorems that establish criteria for learnability for many classes of stochastic processes, including all special cases studied previously in the literature.}
}
Endnote
%0 Conference Paper
%T Learning Theory for Conditional Risk Minimization
%A Alexander Zimin
%A Christoph Lampert
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-zimin17a
%I PMLR
%P 213--222
%U https://proceedings.mlr.press/v54/zimin17a.html
%V 54
%X In this work we study the learnability of stochastic processes with respect to the conditional risk, i.e. the existence of a learning algorithm that improves its next-step performance with the amount of observed data. We introduce a notion of pairwise discrepancy between conditional distributions at different time steps and show how certain properties of these discrepancies can be used to construct a successful learning algorithm. Our main results are two theorems that establish criteria for learnability for many classes of stochastic processes, including all special cases studied previously in the literature.
APA
Zimin, A. & Lampert, C. (2017). Learning Theory for Conditional Risk Minimization. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:213-222. Available from https://proceedings.mlr.press/v54/zimin17a.html.