Learnability of Non-I.I.D.
Proceedings of The 8th Asian Conference on Machine Learning, PMLR 63:158-173, 2016.
Abstract
Learnability has always been one of the most central problems in learning theory. Most previous studies of this issue assume that the samples are drawn independently and identically from an underlying (unknown) distribution. The i.i.d. assumption, however, does not hold in many real applications. In this paper, we study the learnability of problems where the samples are drawn from the empirical process of a stationary β-mixing sequence, a widely used assumption implying a dependence that weakens over time in the training samples. By utilizing the independent blocks technique, we provide a necessary and sufficient condition for learnability: average stability is equivalent to learnability with AERM (Asymptotic Empirical Risk Minimization) in the non-i.i.d. learning setting. In addition, we discuss the generalization error when the test variable is dependent on the training samples.
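As a brief sketch of the setting (the paper contains the precise definitions): for a stationary sequence $\{Z_t\}_{t \ge 1}$, with $\sigma_1^t$ the sigma-algebra generated by $Z_1,\dots,Z_t$ and $\sigma_{t+a}^{\infty}$ that generated by $Z_{t+a}, Z_{t+a+1},\dots$, the β-mixing coefficient is commonly defined as

$$\beta(a) = \mathbb{E}\Big[\sup_{A \in \sigma_{t+a}^{\infty}} \big|\Pr(A \mid \sigma_1^t) - \Pr(A)\big|\Big],$$

which by stationarity does not depend on $t$; the sequence is β-mixing if $\beta(a) \to 0$ as $a \to \infty$, formalizing dependence that weakens over time.

The independent blocks technique (Yu, 1994) trades the dependent sample for alternating blocks that are nearly independent, at a total-variation cost on the order of $\mu\,\beta(a)$ for $\mu$ kept blocks of length $a$. The following minimal Python sketch illustrates only the block partition; the function name `alternating_blocks` and the AR(1) toy chain are hypothetical illustrations, not code from the paper.

```python
import numpy as np

def alternating_blocks(sample, block_size):
    """Partition a sequence into alternating blocks.

    Hypothetical illustration of the independent blocks construction
    (Yu, 1994): split the sequence into consecutive blocks of length
    `block_size` and keep only every other one. The discarded blocks
    act as gaps, so for a beta-mixing sequence the kept blocks are
    close, in total variation, to independent draws, with error on
    the order of mu * beta(block_size) for mu kept blocks.
    """
    mu = len(sample) // (2 * block_size)  # number of (kept, gap) block pairs
    kept = [sample[2 * j * block_size:(2 * j + 1) * block_size]
            for j in range(mu)]
    return np.array(kept)  # shape: (mu, block_size)

# Toy usage: an AR(1) chain is stationary and geometrically beta-mixing,
# so its alternating blocks are nearly independent for large block_size.
rng = np.random.default_rng(0)
z = np.empty(1000)
z[0] = rng.normal()
for t in range(1, 1000):
    z[t] = 0.5 * z[t - 1] + rng.normal()

blocks = alternating_blocks(z, block_size=25)
print(blocks.shape)  # (20, 25)
```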