Is Transductive Learning Equivalent to PAC Learning?

Shaddin Dughmi, Yusuf Hakan Kalayci, Grayson York
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:418-443, 2025.

Abstract

Much of learning theory is concerned with the design and analysis of probably approximately correct (PAC) learners. The closely related transductive model of learning has recently seen more scrutiny, with its learners often used as precursors to PAC learners. Our goal in this work is to understand and quantify the exact relationship between these two models. First, we observe that modest extensions of existing results show the models to be essentially equivalent for realizable learning for most natural loss functions, up to low order terms in the error and sample complexity. The situation for agnostic learning appears less straightforward, with sample complexities potentially separated by a $\frac{1}{\epsilon}$ factor. This is therefore where our main contributions lie. Our results are two-fold:
  1. For agnostic learning with bounded losses (including, for example, multiclass classification), we show that PAC learning reduces to transductive learning at the cost of low-order terms in the error and sample complexity. This is via an adaptation of the reduction of Aden-Ali et al. (2023a) to the agnostic setting.
  2. For agnostic binary classification, we show the converse: transductive learning is essentially no more difficult than PAC learning. Together with our first result this implies that the PAC and transductive models are essentially equivalent for agnostic binary classification. This is our most technical result, and involves two key steps: (a) A symmetrization argument on the agnostic one-inclusion graph (OIG) of Long (1998) to derive the worst-case agnostic transductive instance, and (b) expressing the error of the agnostic OIG algorithm for this instance in terms of the empirical Rademacher complexity of the class.
We leave as an intriguing open question whether our second result can be extended beyond binary classification to show the transductive and PAC models equivalent more broadly.
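Two standard notions referenced above may help orient the reader; both are general background rather than restatements of the paper's results. In the transductive model studied in this line of work, a learner is typically given all but one point of a sample with labels and is judged by its expected error on the held-out point, whereas a PAC learner must bound its error on a fresh example from the underlying distribution with probability $1-\delta$. The empirical Rademacher complexity invoked in step (b) of the second result is the usual quantity: for a binary hypothesis class $\mathcal{H} \subseteq \{-1,+1\}^{\mathcal{X}}$ and an unlabeled sample $S = (x_1, \ldots, x_n)$,

\[
  \widehat{\mathfrak{R}}_S(\mathcal{H}) \;=\; \mathbb{E}_{\sigma}\left[ \sup_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, h(x_i) \right],
\]

where $\sigma_1, \ldots, \sigma_n$ are independent uniform $\pm 1$ (Rademacher) random variables. Classical uniform-convergence bounds control agnostic generalization error in terms of this quantity plus a confidence term, which is the standard bridge from a Rademacher-type bound to PAC sample complexity.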

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-dughmi25a,
  title     = {Is Transductive Learning Equivalent to PAC Learning?},
  author    = {Dughmi, Shaddin and Kalayci, Yusuf Hakan and York, Grayson},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {418--443},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/dughmi25a/dughmi25a.pdf},
  url       = {https://proceedings.mlr.press/v272/dughmi25a.html}
}
APA
Dughmi, S., Kalayci, Y.H. & York, G. (2025). Is Transductive Learning Equivalent to PAC Learning? Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:418-443. Available from https://proceedings.mlr.press/v272/dughmi25a.html.
