Sharp bounds on aggregate expert error

Aryeh Kontorovich, Ariel Avital
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:653-663, 2025.

Abstract

We revisit the classic problem of aggregating binary advice from conditionally independent experts, also known as the Naive Bayes setting. Our quantity of interest is the error probability of the optimal decision rule. In the case of symmetric errors (sensitivity = specificity), reasonably tight bounds on the optimal error probability are known. In the general asymmetric case, we are not aware of any nontrivial estimates on this quantity. Our contribution consists of sharp upper and lower bounds on the optimal error probability in the general case, which recover and sharpen the best known results in the symmetric special case. Additionally, our bounds are apparently the first to take the bias into account. Since this turns out to be closely connected to bounding the total variation distance between two product distributions, our results also have bearing on this important and challenging problem.
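In the setting the abstract describes, the Bayes-optimal aggregation rule for conditionally independent experts is a log-likelihood-ratio (weighted) vote that accounts for each expert's sensitivity, specificity, and the class prior. The sketch below is purely illustrative and not from the paper: the function names `optimal_vote` and `error_probability`, and the sample sensitivities/specificities, are assumptions made here, and the error probability is estimated by Monte Carlo rather than by the paper's bounds.

```python
import math
import random

def optimal_vote(x, sens, spec, prior):
    """Bayes-optimal label for conditionally independent binary experts.

    x: list of 0/1 expert opinions; sens[i] = P(x_i = 1 | Y = 1),
    spec[i] = P(x_i = 0 | Y = 0); prior = P(Y = 1).
    Predicts 1 iff the posterior log-odds of Y = 1 are positive.
    """
    score = math.log(prior / (1 - prior))
    for xi, p, q in zip(x, sens, spec):
        if xi:
            score += math.log(p / (1 - q))   # expert voted 1
        else:
            score += math.log((1 - p) / q)   # expert voted 0
    return 1 if score > 0 else 0

def error_probability(sens, spec, prior, trials=200_000, seed=0):
    """Monte Carlo estimate of the optimal rule's error probability."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        y = 1 if rng.random() < prior else 0
        # Each expert fires independently given the true label y.
        x = [rng.random() < (p if y else 1 - q)
             for p, q in zip(sens, spec)]
        errors += optimal_vote(x, sens, spec, prior) != y
    return errors / trials
```

For example, `error_probability([0.9, 0.7, 0.6], [0.8, 0.75, 0.65], prior=0.3)` estimates the optimal error for three asymmetric experts with a biased prior; in the symmetric case (sensitivity = specificity, prior 1/2) the rule reduces to the familiar weighted majority vote.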

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-kontorovich25a,
  title     = {Sharp bounds on aggregate expert error},
  author    = {Kontorovich, Aryeh and Avital, Ariel},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {653--663},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/kontorovich25a/kontorovich25a.pdf},
  url       = {https://proceedings.mlr.press/v272/kontorovich25a.html},
  abstract  = {We revisit the classic problem of aggregating binary advice from conditionally independent experts, also known as the Naive Bayes setting. Our quantity of interest is the error probability of the optimal decision rule. In the case of symmetric errors (sensitivity = specificity), reasonably tight bounds on the optimal error probability are known. In the general asymmetric case, we are not aware of any nontrivial estimates on this quantity. Our contribution consists of sharp upper and lower bounds on the optimal error probability in the general case, which recover and sharpen the best known results in the symmetric special case. Additionally, our bounds are apparently the first to take the bias into account. Since this turns out to be closely connected to bounding the total variation distance between two product distributions, our results also have bearing on this important and challenging problem.}
}
Endnote
%0 Conference Paper
%T Sharp bounds on aggregate expert error
%A Aryeh Kontorovich
%A Ariel Avital
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-kontorovich25a
%I PMLR
%P 653--663
%U https://proceedings.mlr.press/v272/kontorovich25a.html
%V 272
%X We revisit the classic problem of aggregating binary advice from conditionally independent experts, also known as the Naive Bayes setting. Our quantity of interest is the error probability of the optimal decision rule. In the case of symmetric errors (sensitivity = specificity), reasonably tight bounds on the optimal error probability are known. In the general asymmetric case, we are not aware of any nontrivial estimates on this quantity. Our contribution consists of sharp upper and lower bounds on the optimal error probability in the general case, which recover and sharpen the best known results in the symmetric special case. Additionally, our bounds are apparently the first to take the bias into account. Since this turns out to be closely connected to bounding the total variation distance between two product distributions, our results also have bearing on this important and challenging problem.
APA
Kontorovich, A., & Avital, A. (2025). Sharp bounds on aggregate expert error. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:653-663. Available from https://proceedings.mlr.press/v272/kontorovich25a.html.