Minimax M-estimation under Adversarial Contamination
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1906-1924, 2022.
Abstract
We present a new finite-sample analysis of Catoni’s M-estimator under adversarial contamination, where an adversary is allowed to corrupt a fraction of the samples arbitrarily. We make minimal assumptions on the distribution of the uncontaminated random variables: we only assume the existence of a known upper bound $\upsilon_\varepsilon > 0$ on the $(1+\varepsilon)$-th central moment of the random variables, i.e., for $\varepsilon \in (0,1]$, $\mathbb{E}_{X_1 \sim D}|X_1 - \mu|^{1+\varepsilon} \le \upsilon_\varepsilon$. We provide a lower bound on the minimax error rate for the mean estimation problem under adversarial corruption under this weak assumption, and establish that the proposed M-estimator achieves this lower bound (up to multiplicative constants). When the variance is infinite, the tolerance to contamination of any estimator decreases as $\varepsilon \downarrow 0$. We establish a tight upper bound that characterizes this trade-off. To illustrate the usefulness of the derived robust M-estimator in an online setting, we present a bandit algorithm for the partially identifiable best-arm identification problem that improves upon the sample complexity of state-of-the-art algorithms.
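To make the estimator family concrete, below is a minimal sketch of a classical Catoni-type mean estimator in the sense of Catoni (2012): it solves sum_i psi((X_i - theta)/s) = 0 for theta with the standard influence function psi(x) = sign(x) log(1 + |x| + x^2/2). The scale parameter s, the bisection solver, and the toy heavy-tailed data are assumptions made purely for illustration; they are not the contamination-robust variant or the tuning analyzed in this paper.

import numpy as np

def catoni_psi(x):
    # Catoni's (wide) influence function: psi(x) = sign(x) * log(1 + |x| + x^2 / 2)
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x * x)

def catoni_mean(samples, scale):
    # Solve sum_i psi((X_i - theta) / scale) = 0 for theta by bisection.
    # The score is non-increasing in theta and changes sign on [min(X), max(X)],
    # so bisection converges to the unique root.
    samples = np.asarray(samples, dtype=float)

    def score(theta):
        return np.sum(catoni_psi((samples - theta) / scale))

    lo, hi = samples.min(), samples.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_t(df=2.5, size=2000)   # heavy-tailed samples with true mean 0
    x[:40] = 1e3                             # a small fraction of adversarial outliers
    print("empirical mean :", x.mean())
    print("Catoni estimate:", catoni_mean(x, scale=5.0))

On this toy input the empirical mean is dragged far from 0 by the corrupted points, while the Catoni-type estimate stays close to the true mean, which is the qualitative behavior the paper's finite-sample bounds quantify under the weaker $(1+\varepsilon)$-th moment assumption.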