
# Superconstant Inapproximability of Decision Tree Learning

*Proceedings of Thirty Seventh Conference on Learning Theory*, PMLR 247:2979-3010, 2024.

#### Abstract

We consider the task of properly PAC learning decision trees with queries. Recent work of Koch, Strassle, and Tan showed that the strictest version of this task, where the hypothesis tree T is required to be optimally small, is NP-hard. Their work leaves open the question of whether the task remains intractable if T is only required to be close to optimal, say within a factor of 2, rather than exactly optimal. We answer this question in the affirmative and show that the task indeed remains NP-hard even if T is allowed to be within any constant factor of optimal. More generally, our result allows for a smooth tradeoff between the hardness assumption and the inapproximability factor. As Koch et al.'s techniques do not appear to be amenable to such a strengthening, we first recover their result with a new and simpler proof, which we couple with a new XOR lemma for decision trees. While there is a large body of work on XOR lemmas for decision trees, our setting necessitates parameters that are extremely sharp and are not known to be attainable by existing lemmas of this kind. Our work also carries new implications for the related problem of Decision Tree Minimization.
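To make the central quantity concrete: the abstract concerns the size of an optimally small decision tree for a Boolean function, and XOR lemmas, which relate the complexity of a function to that of the XOR of disjoint copies of it. The sketch below is an illustration only, not the paper's construction: a brute-force computation of the minimum leaf count of any decision tree for a function on a handful of variables, applied to AND, parity, and an XOR of two disjoint copies of AND.

```python
from functools import lru_cache
from itertools import product

def min_leaves(f, n):
    """Minimum leaf count over all decision trees computing
    f: {0,1}^n -> {0,1}. Exponential brute force; tiny n only."""

    @lru_cache(maxsize=None)
    def solve(assignment):
        # assignment[i] is 0/1 if variable i is fixed, None if still free
        free = [i for i, v in enumerate(assignment) if v is None]
        vals = set()
        for bits in product((0, 1), repeat=len(free)):
            point = list(assignment)
            for i, b in zip(free, bits):
                point[i] = b
            vals.add(f(tuple(point)))
            if len(vals) == 2:
                break
        if len(vals) == 1:
            return 1  # f is constant on this subcube: one leaf suffices
        # otherwise query some free variable and take the cheapest split
        best = None
        for i in free:
            lo, hi = list(assignment), list(assignment)
            lo[i], hi[i] = 0, 1
            cost = solve(tuple(lo)) + solve(tuple(hi))
            best = cost if best is None else min(best, cost)
        return best

    return solve((None,) * n)

AND2 = lambda x: x[0] & x[1]
XOR2 = lambda x: x[0] ^ x[1]
# XOR of two disjoint copies of AND2, in the spirit of an XOR lemma
G = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])

print(min_leaves(AND2, 2))  # 3
print(min_leaves(XOR2, 2))  # 4: parity must read both variables
print(min_leaves(G, 4))     # strictly larger than either copy alone
```

An XOR lemma in this setting asserts, roughly, that the XOR of disjoint copies is as hard as the copies combined; the paper's contribution is such a lemma with unusually sharp parameters, which this toy computation does not attempt to capture.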