A constrained risk inequality for general losses
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:802-810, 2021.
Abstract
We provide a general constrained risk inequality that applies to arbitrary non-decreasing losses, extending a result of Brown and Low (Ann. Stat., 1996). Given two distributions P0 and P1, we find a lower bound for the risk of estimating a parameter θ(P1) under P1 given an upper bound on the risk of estimating the parameter θ(P0) under P0. The inequality is a useful pedagogical tool, as its proof relies only on the Cauchy-Schwarz inequality, it applies to general losses, and it transparently gives risk lower bounds on super-efficient and adaptive estimators.
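For orientation, a minimal sketch of the squared-error special case due to Brown and Low that the abstract describes as being extended; the quantities ε, Δ, and I below follow the standard statement of that earlier result and are not taken from this paper. Write L = dP1/dP0 for the likelihood ratio, I^2 = E_P0[L^2], and Δ = θ(P1) - θ(P0), taken nonnegative without loss of generality. If an estimator δ satisfies E_P0[(δ - θ(P0))^2] ≤ ε^2 with Iε ≤ Δ, then the Cauchy-Schwarz inequality gives

    E_P1[δ - θ(P0)] = E_P0[L (δ - θ(P0))] ≤ (E_P0[L^2])^{1/2} (E_P0[(δ - θ(P0))^2])^{1/2} ≤ Iε,

so E_P1[θ(P1) - δ] ≥ Δ - Iε, and Jensen's inequality then yields the lower bound

    E_P1[(δ - θ(P1))^2] ≥ (Δ - Iε)^2.

Good performance under P0 (small ε) thus forces the risk under P1 up toward Δ^2, which is the mechanism behind risk lower bounds for super-efficient and adaptive estimators.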