An Almost Optimal PAC Algorithm

Hans U. Simon
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1552-1563, 2015.

Abstract

The best currently known general lower and upper bounds on the number of labeled examples needed for learning a concept class in the PAC framework (the realizable case) do not perfectly match: they leave a gap of order \log(1/ε) (resp. a gap which is logarithmic in another one of the relevant parameters). It is an unresolved question whether there exists an “optimal PAC algorithm” which establishes a general upper bound of precisely the same order of magnitude as the general lower bound. According to a result of Auer and Ortner, there is no way of showing that arbitrary consistent algorithms are optimal, because they can provably differ from optimality by a factor of \log(1/ε). In contrast to this result, we show that every consistent algorithm L (even a provably suboptimal one) induces a family (L_K)_{K\ge1} of PAC algorithms (with 2K-1 calls of L as a subroutine) which come very close to optimality: the number of labeled examples needed by L_K exceeds the general lower bound only by a factor of \ell_K(1/ε), where \ell_K denotes (a truncated version of) the K-times iterated logarithm. Moreover, L_K is applicable to any concept class C of finite VC-dimension, and it can be implemented efficiently whenever the consistency problem for C is feasible. We show furthermore that, for every consistent algorithm L, L_2 is an optimal PAC algorithm for precisely the same concept classes which were used by Auer and Ortner to show the existence of suboptimal consistent algorithms. This can be seen as an indication that L_K may perform even better than our worst-case analysis suggests.
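The factor \ell_K(1/ε) above is a truncated K-times iterated logarithm. A minimal sketch of such a function is given below; it assumes the natural logarithm and truncates each intermediate value at 1 so the result never drops below 1 (the paper's exact base and truncation convention may differ).

```python
import math

def ell_K(x: float, K: int) -> float:
    """Truncated K-times iterated logarithm (illustrative sketch).

    Applies log K times, flooring every intermediate value at 1,
    so ell_K(x, K) >= 1 for all inputs and is non-increasing in K.
    """
    v = x
    for _ in range(K):
        v = max(1.0, math.log(max(v, 1.0)))
    return v
```

For example, with K = 1 this is just max(1, ln x), and each further iteration shrinks the value toward 1, which is why larger K brings the overhead factor \ell_K(1/ε) closer to a constant.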

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Simon15a,
  title     = {An Almost Optimal PAC Algorithm},
  author    = {Hans U. Simon},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1552--1563},
  year      = {2015},
  editor    = {Peter Grünwald and Elad Hazan and Satyen Kale},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Simon15a.pdf},
  url       = {http://proceedings.mlr.press/v40/Simon15a.html},
  abstract  = {The best currently known general lower and upper bounds on the number of labeled examples needed for learning a concept class in the PAC framework (the realizable case) do not perfectly match: they leave a gap of order \log(1/ε) (resp. a gap which is logarithmic in another one of the relevant parameters). It is an unresolved question whether there exists an “optimal PAC algorithm” which establishes a general upper bound of precisely the same order of magnitude as the general lower bound. According to a result of Auer and Ortner, there is no way of showing that arbitrary consistent algorithms are optimal, because they can provably differ from optimality by a factor of \log(1/ε). In contrast to this result, we show that every consistent algorithm L (even a provably suboptimal one) induces a family (L_K)_{K\ge1} of PAC algorithms (with 2K-1 calls of L as a subroutine) which come very close to optimality: the number of labeled examples needed by L_K exceeds the general lower bound only by a factor of \ell_K(1/ε), where \ell_K denotes (a truncated version of) the K-times iterated logarithm. Moreover, L_K is applicable to any concept class C of finite VC-dimension, and it can be implemented efficiently whenever the consistency problem for C is feasible. We show furthermore that, for every consistent algorithm L, L_2 is an optimal PAC algorithm for precisely the same concept classes which were used by Auer and Ortner to show the existence of suboptimal consistent algorithms. This can be seen as an indication that L_K may perform even better than our worst-case analysis suggests.}
}
APA
Simon, H. U. (2015). An Almost Optimal PAC Algorithm. Proceedings of The 28th Conference on Learning Theory, in PMLR 40:1552-1563.