Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers

Stephan Clemencon, Patrice Bertail, Guillaume Papa
Proceedings of The 8th Asian Conference on Machine Learning, PMLR 63:142-157, 2016.

Abstract

The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations, and they involve sharp control of the uniform deviation of i.i.d. averages from their expectation. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.’s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not yet been documented in the literature. It is the main purpose of this paper to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and on knowledge of the related (first-order) inclusion probabilities. Precisely, we prove that minimizing a (possibly biased) weighted version of the empirical risk, referred to as the (approximate) Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of the order O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2}) with \kappa_N = (n/N)/\min_{i \leq N} \pi_i, when data are sampled by means of a rejective scheme of (deterministic) size n within a statistical population of cardinality N ≥ n, a generalization of basic sampling without replacement with unequal probability weights \pi_i > 0. Extensions to other sampling schemes are then established by a coupling argument. Beyond the theoretical results, numerical experiments are presented in order to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.
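For intuition (this sketch is not part of the paper), the HT risk weights each sampled observation by the inverse of its first-order inclusion probability, i.e. it is of the form (1/N) \sum_{i \leq N} (\epsilon_i/\pi_i) \mathbb{I}\{g(X_i) \neq Y_i\}, where \epsilon_i indicates whether unit i was drawn. The short Python simulation below, which uses Poisson sampling for simplicity (the paper's rejective scheme is Poisson sampling conditioned on a fixed sample size n), illustrates why ignoring the design can be harmful: when the inclusion probabilities \pi_i are correlated with the loss, the naive sample average of the losses is biased, whereas the HT-weighted average is design-unbiased.

import numpy as np

rng = np.random.default_rng(0)

# Finite population of size N with features, labels, and the 0-1 loss of a fixed classifier g.
N = 100_000
x = rng.normal(size=N)
y = (x + 0.5 * rng.normal(size=N) > 0).astype(int)
g = lambda z: (z > 0.3).astype(int)        # some fixed classifier (illustrative)
loss = (g(x) != y).astype(float)           # 0-1 loss of g on each population unit
population_risk = loss.mean()              # risk over the whole population

# First-order inclusion probabilities pi_i, deliberately correlated with the loss,
# so the sample over-represents misclassified units.
pi = 0.01 + 0.20 * loss * rng.uniform(size=N)
eps = rng.uniform(size=N) < pi             # Poisson sampling: keep unit i with probability pi_i

naive_risk = loss[eps].mean()              # ignores the sampling scheme (biased)
ht_risk = np.sum(eps * loss / pi) / N      # Horvitz-Thompson risk (design-unbiased)

print(f"population risk:   {population_risk:.4f}")
print(f"naive sample risk: {naive_risk:.4f}")
print(f"HT risk estimate:  {ht_risk:.4f}")

Minimizing the HT-weighted risk over a class of classifiers, rather than the naive sample average, is the procedure whose excess risk the paper bounds at the rate O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2}) stated above.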

Cite this Paper


BibTeX
@InProceedings{pmlr-v63-clemencon64,
  title = {Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers},
  author = {Clemencon, Stephan and Bertail, Patrice and Papa, Guillaume},
  booktitle = {Proceedings of The 8th Asian Conference on Machine Learning},
  pages = {142--157},
  year = {2016},
  editor = {Durrant, Robert J. and Kim, Kee-Eung},
  volume = {63},
  series = {Proceedings of Machine Learning Research},
  address = {The University of Waikato, Hamilton, New Zealand},
  month = {16--18 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v63/clemencon64.pdf},
  url = {https://proceedings.mlr.press/v63/clemencon64.html},
  abstract = {The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations, and they involve sharp control of the uniform deviation of i.i.d. averages from their expectation. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.’s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not yet been documented in the literature. It is the main purpose of this paper to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and on knowledge of the related (first-order) inclusion probabilities. Precisely, we prove that minimizing a (possibly biased) weighted version of the empirical risk, referred to as the (approximate) Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of the order O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2}) with \kappa_N = (n/N)/\min_{i \leq N} \pi_i, when data are sampled by means of a rejective scheme of (deterministic) size n within a statistical population of cardinality N ≥ n, a generalization of basic sampling without replacement with unequal probability weights \pi_i > 0. Extensions to other sampling schemes are then established by a coupling argument. Beyond the theoretical results, numerical experiments are presented in order to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.}
}
Endnote
%0 Conference Paper
%T Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers
%A Stephan Clemencon
%A Patrice Bertail
%A Guillaume Papa
%B Proceedings of The 8th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Robert J. Durrant
%E Kee-Eung Kim
%F pmlr-v63-clemencon64
%I PMLR
%P 142--157
%U https://proceedings.mlr.press/v63/clemencon64.html
%V 63
%X The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations, and they involve sharp control of the uniform deviation of i.i.d. averages from their expectation. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.’s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not yet been documented in the literature. It is the main purpose of this paper to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and on knowledge of the related (first-order) inclusion probabilities. Precisely, we prove that minimizing a (possibly biased) weighted version of the empirical risk, referred to as the (approximate) Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of the order O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2}) with \kappa_N = (n/N)/\min_{i \leq N} \pi_i, when data are sampled by means of a rejective scheme of (deterministic) size n within a statistical population of cardinality N ≥ n, a generalization of basic sampling without replacement with unequal probability weights \pi_i > 0. Extensions to other sampling schemes are then established by a coupling argument. Beyond the theoretical results, numerical experiments are presented in order to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.
RIS
TY - CPAPER
TI - Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers
AU - Stephan Clemencon
AU - Patrice Bertail
AU - Guillaume Papa
BT - Proceedings of The 8th Asian Conference on Machine Learning
DA - 2016/11/20
ED - Robert J. Durrant
ED - Kee-Eung Kim
ID - pmlr-v63-clemencon64
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 63
SP - 142
EP - 157
L1 - http://proceedings.mlr.press/v63/clemencon64.pdf
UR - https://proceedings.mlr.press/v63/clemencon64.html
AB - The generalization ability of minimizers of the empirical risk in the context of binary classification has been investigated under a wide variety of complexity assumptions for the collection of classifiers over which optimization is performed. In contrast, the vast majority of the works dedicated to this issue stipulate that the training dataset used to compute the empirical risk functional is composed of i.i.d. observations, and they involve sharp control of the uniform deviation of i.i.d. averages from their expectation. Beyond the cases where training data are drawn uniformly without replacement from a large i.i.d. sample or modelled as a realization of a weakly dependent sequence of r.v.’s, statistical guarantees when the data used to train a classifier are drawn by means of a more general sampling/survey scheme and exhibit a complex dependence structure have not yet been documented in the literature. It is the main purpose of this paper to show that the theory of empirical risk minimization can be extended to situations where statistical learning is based on survey samples and on knowledge of the related (first-order) inclusion probabilities. Precisely, we prove that minimizing a (possibly biased) weighted version of the empirical risk, referred to as the (approximate) Horvitz-Thompson risk (HT risk), over a class of controlled complexity leads to a rate for the excess risk of the order O_{\mathbb{P}}((\kappa_N (\log N)/n)^{1/2}) with \kappa_N = (n/N)/\min_{i \leq N} \pi_i, when data are sampled by means of a rejective scheme of (deterministic) size n within a statistical population of cardinality N ≥ n, a generalization of basic sampling without replacement with unequal probability weights \pi_i > 0. Extensions to other sampling schemes are then established by a coupling argument. Beyond the theoretical results, numerical experiments are presented in order to show the relevance of HT risk minimization and that ignoring the sampling scheme used to generate the training dataset may completely jeopardize the learning procedure.
ER -
APA
Clemencon, S., Bertail, P. & Papa, G. (2016). Learning from Survey Training Samples: Rate Bounds for Horvitz-Thompson Risk Minimizers. Proceedings of The 8th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 63:142-157. Available from https://proceedings.mlr.press/v63/clemencon64.html.
