Sample-Near-Optimal Agnostic Boosting with Improved Running Time

Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-27, 2026.

Abstract

Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (da Cunha et al., 2025), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity, running in time polynomial in the sample size when the other parameters of the problem are held fixed.
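For intuition about the weak-to-strong conversion the abstract describes, here is a minimal sketch of classic boosting (AdaBoost with decision stumps) in the realizable setting. This is background only, not the paper's agnostic algorithm: the reweighting scheme, the `boost` function, and the stump weak learner are standard textbook choices, not taken from this work.

```python
import numpy as np

def boost(X, y, rounds=20):
    """Classic AdaBoost sketch: reweight examples so each successive
    weak learner (a threshold stump) focuses on points the current
    ensemble still gets wrong, then combine stumps by weighted vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # uniform initial weights
    stumps, alphas = [], []
    for _ in range(rounds):
        # Weak learner: exhaustively pick the stump with the lowest
        # weighted error under the current distribution w.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)            # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight the mistakes
        w /= w.sum()
        stumps.append((j, thr, sign))
        alphas.append(alpha)

    def strong(Xq):
        # Strong learner: sign of the alpha-weighted vote of all stumps.
        votes = sum(a * s * np.where(Xq[:, j] > t, 1, -1)
                    for a, (j, t, s) in zip(alphas, stumps))
        return np.sign(votes)
    return strong
```

The agnostic setting studied in the paper drops the assumption that any hypothesis fits the data well, which is precisely where this classic analysis breaks down and where the paper's sample- and time-efficiency guarantees apply instead.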

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-cunha26a,
  title     = {Sample-Near-Optimal Agnostic Boosting with Improved Running Time},
  author    = {da Cunha, Arthur and H{\o}gsgaard, Mikael M{\o}ller and Paudice, Andrea},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--27},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/cunha26a/cunha26a.pdf},
  url       = {https://proceedings.mlr.press/v313/cunha26a.html},
  abstract  = {Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (da Cunha et al., 2025), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity, running in time polynomial in the sample size when the other parameters of the problem are held fixed.}
}
Endnote
%0 Conference Paper
%T Sample-Near-Optimal Agnostic Boosting with Improved Running Time
%A Arthur da Cunha
%A Mikael Møller Høgsgaard
%A Andrea Paudice
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-cunha26a
%I PMLR
%P 1--27
%U https://proceedings.mlr.press/v313/cunha26a.html
%V 313
%X Boosting is a powerful method that turns weak learners, which perform only slightly better than random guessing, into strong learners with high accuracy. While boosting is well understood in the classic setting, it is less so in the agnostic case, where no assumptions are made about the data. Indeed, only recently was the sample complexity of agnostic boosting nearly settled (da Cunha et al., 2025), but the known algorithm achieving this bound has exponential running time. In this work, we propose the first agnostic boosting algorithm with near-optimal sample complexity, running in time polynomial in the sample size when the other parameters of the problem are held fixed.
APA
da Cunha, A., Høgsgaard, M. M., & Paudice, A. (2026). Sample-Near-Optimal Agnostic Boosting with Improved Running Time. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-27. Available from https://proceedings.mlr.press/v313/cunha26a.html.