Is Transductive Learning Equivalent to PAC Learning?
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:418-443, 2025.
Abstract
Much of learning theory is concerned with the design and analysis of probably approximately correct (PAC) learners. The closely related transductive model of learning has recently seen more scrutiny, with its learners often used as precursors to PAC learners. Our goal in this work is to understand and quantify the exact relationship between these two models. First, we observe that modest extensions of existing results show the models to be essentially equivalent for realizable learning for most natural loss functions, up to low order terms in the error and sample complexity. The situation for agnostic learning appears less straightforward, with sample complexities potentially separated by a factor of 1/ε. This is therefore where our main contributions lie. Our results are two-fold:
- For agnostic learning with bounded losses (including, for example, multiclass classification), we show that PAC learning reduces to transductive learning at the cost of low-order terms in the error and sample complexity. This is via an adaptation of the reduction of Aden-Ali et al. (2023a) to the agnostic setting.
- For agnostic binary classification, we show the converse: transductive learning is essentially no more difficult than PAC learning. Together with our first result, this implies that the PAC and transductive models are essentially equivalent for agnostic binary classification. This is our most technical result, and it involves two key steps: (a) a symmetrization argument on the agnostic one-inclusion graph (OIG) of Long (1998) to derive the worst-case agnostic transductive instance, and (b) an expression of the error of the agnostic OIG algorithm on this instance in terms of the empirical Rademacher complexity of the class.
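The empirical Rademacher complexity appearing in the second result, R_S(H) = E_σ[sup_{h∈H} (1/n) Σ_i σ_i h(x_i)], can be estimated by Monte Carlo for a finite hypothesis class. The sketch below is purely illustrative and not taken from the paper; the threshold class on four points is a hypothetical example.

```python
import random

def empirical_rademacher(predictions, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_S(H) = E_sigma[ sup_h (1/n) sum_i sigma_i * h(x_i) ],
    where `predictions` lists, for each hypothesis h in a finite
    class H, the tuple of its labels (h(x_1), ..., h(x_n)) in {-1,+1}^n."""
    rng = random.Random(seed)
    n = len(predictions[0])
    total = 0.0
    for _ in range(n_trials):
        # Draw a vector of i.i.d. Rademacher (uniform +/-1) signs.
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        # Supremum over the class of the normalized correlation with sigma.
        total += max(sum(s * p for s, p in zip(sigma, h)) / n
                     for h in predictions)
    return total / n_trials

# Hypothetical example: threshold classifiers h_t(x) = +1 iff x >= t
# on the sample S = (1, 2, 3, 4), with thresholds t in {0, ..., 5}.
points = [1, 2, 3, 4]
H = [tuple(1 if x >= t else -1 for x in points) for t in range(6)]
r = empirical_rademacher(H)
print(round(r, 2))
```

For this small class the expectation can also be computed exactly by enumerating all 2^4 sign vectors, which gives 19/32 ≈ 0.59; the Monte Carlo estimate should land close to that value.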