Nearly second-order optimality of online joint detection and estimation via one-sample update schemes
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:519-528, 2018.
Abstract
Sequential hypothesis testing and change-point detection with unknown distribution parameters are fundamental problems in statistics and machine learning. We show that for such problems, detection procedures based on sequential likelihood ratios with simple one-sample update estimates, such as online mirror descent, are nearly second-order optimal: the upper bound on the algorithm's performance meets the lower bound asymptotically, up to a log-log factor in the false-alarm rate as it tends to zero. This is a blessing, since although the generalized likelihood ratio (GLR) statistics are theoretically optimal, they cannot be computed recursively, and their exact computation usually requires infinite memory of historical data. We prove the nearly second-order optimality by making a connection between sequential change-point detection and online convex optimization, and by leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical examples validate our theory.