Complete statistical theory of learning: learning using statistical invariants

Vladimir Vapnik, Rauf Izmailov
Proceedings of the Ninth Symposium on Conformal and Probabilistic Prediction and Applications, PMLR 128:4-40, 2020.

Abstract

Statistical theory of learning considers methods of constructing approximations that converge to the desired function as the number of observations increases. This theory studies mechanisms that provide convergence in the space of functions in the $L_2$ norm, i.e., it studies the so-called strong mode of convergence. However, in Hilbert space, along with convergence in the space of functions, there also exists the so-called weak mode of convergence, i.e., convergence in the space of functionals. Under some conditions, this weak mode of convergence also implies convergence of the approximations to the desired function in the $L_2$ norm, although such convergence is based on different mechanisms. The paper discusses new learning methods that use both modes of convergence (weak and strong) simultaneously. Such methods allow one to (1) select an admissible subset of functions (i.e., the set of appropriate approximation functions), and (2) find the desired approximation in this admissible subset. Since only two modes of convergence exist in Hilbert space, we call the theory that uses both modes the complete statistical theory of learning. Along with general reasoning, we describe new learning algorithms referred to as Learning Using Statistical Invariants (LUSI). LUSI algorithms were developed for sets of functions belonging to a Reproducing Kernel Hilbert Space (RKHS); they include a modified SVM method (the LUSI-SVM method). The paper also presents a LUSI modification of Neural Networks (LUSI-NN). LUSI methods require fewer training examples than standard approaches to achieve the same performance. In conclusion, the paper discusses the general (philosophical) framework of a new learning paradigm that includes the concept of intelligence.
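To make the abstract's two convergence modes concrete, the following display restates them in symbols; this is a sketch reconstructed from the definitions above, not the paper's precise statements. Strong convergence of the approximations $f_\ell$ to the desired function $f_0$ means

$$\lim_{\ell\to\infty}\int \big(f_\ell(x)-f_0(x)\big)^2\,dP(x)=0,$$

while weak convergence means that, for every function $\psi\in L_2$,

$$\lim_{\ell\to\infty}\int \psi(x)\big(f_\ell(x)-f_0(x)\big)\,dP(x)=0.$$

In the LUSI framework, a set of predicates $\psi_1,\dots,\psi_m$ defines empirical statistical invariants that the approximation $f$ is required to preserve on the training data $(x_1,y_1),\dots,(x_\ell,y_\ell)$:

$$\frac{1}{\ell}\sum_{i=1}^{\ell}\psi_s(x_i)\,f(x_i)=\frac{1}{\ell}\sum_{i=1}^{\ell}\psi_s(x_i)\,y_i,\qquad s=1,\dots,m.$$

The invariants carve out the admissible subset of functions (the weak mode); a strong-mode method then picks the approximation inside that subset. The code below is a minimal numerical sketch of this idea, not the paper's LUSI-SVM algorithm: it fits kernel ridge regression in an RKHS while enforcing the invariants as linear equality constraints through a KKT system. The RBF kernel, the predicate matrix P, and the regularization weight gamma are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, width=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_with_invariants(X, y, P, gamma=1e-2, width=1.0):
    """Kernel ridge regression constrained to preserve the empirical
    invariants P @ (K @ a) = P @ y (one row of P per predicate psi_s).

    Minimizes ||y - K a||^2 + gamma * a^T K a subject to P K a = P y,
    via the KKT stationarity conditions. A sketch of the constrained-RKHS
    idea only; the paper's LUSI-SVM has its own closed-form solution.
    """
    ell = len(y)
    K = rbf_kernel(X, X, width)
    # KKT system (after factoring K out of the stationarity equation):
    #   [ 2(K + gamma*I)   P^T ] [ a  ]   [ 2y  ]
    #   [      P K          0  ] [ mu ] = [ P y ]
    top = np.hstack([2.0 * (K + gamma * np.eye(ell)), P.T])
    bot = np.hstack([P @ K, np.zeros((P.shape[0], P.shape[0]))])
    sol = np.linalg.solve(np.vstack([top, bot]),
                          np.concatenate([2.0 * y, P @ y]))
    a = sol[:ell]
    return lambda Xnew: rbf_kernel(Xnew, X, width) @ a

# Usage: the single predicate psi = 1 forces the fitted values to match
# the empirical mean of y exactly.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(40)
f = fit_with_invariants(X, y, P=np.ones((1, 40)))
print(abs(f(X).mean() - y.mean()))  # ~0: the mean invariant holds
```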

Cite this Paper


BibTeX
@InProceedings{pmlr-v128-vapnik20a,
  title     = {Complete statistical theory of learning: learning using statistical invariants},
  author    = {Vapnik, Vladimir and Izmailov, Rauf},
  booktitle = {Proceedings of the Ninth Symposium on Conformal and Probabilistic Prediction and Applications},
  pages     = {4--40},
  year      = {2020},
  editor    = {Gammerman, Alexander and Vovk, Vladimir and Luo, Zhiyuan and Smirnov, Evgueni and Cherubin, Giovanni},
  volume    = {128},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v128/vapnik20a/vapnik20a.pdf},
  url       = {https://proceedings.mlr.press/v128/vapnik20a.html}
}
