Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation

Yacine Jernite, Anna Choromanska, David Sontag
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1665-1674, 2017.

Abstract

We consider multi-class classification where the predictor has a hierarchical structure that allows for a very large number of labels both at train and test time. The predictive power of such models can heavily depend on the structure of the tree, and although past work showed how to learn the tree structure, it expected that the feature vectors remained static. We provide a novel algorithm to simultaneously perform representation learning for the input data and learning of the hierarchical predictor. Our approach optimizes an objective function which favors balanced and easily-separable multi-way node partitions. We theoretically analyze this objective, showing that it gives rise to a boosting style property and a bound on classification error. We next show how to extend the algorithm to conditional density estimation. We empirically validate both variants of the algorithm on text classification and language modeling, respectively, and show that they compare favorably to common baselines in terms of accuracy and running time.

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-jernite17a,
  title     = {Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation},
  author    = {Yacine Jernite and Anna Choromanska and David Sontag},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1665--1674},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/jernite17a/jernite17a.pdf},
  url       = {http://proceedings.mlr.press/v70/jernite17a.html},
  abstract  = {We consider multi-class classification where the predictor has a hierarchical structure that allows for a very large number of labels both at train and test time. The predictive power of such models can heavily depend on the structure of the tree, and although past work showed how to learn the tree structure, it expected that the feature vectors remained static. We provide a novel algorithm to simultaneously perform representation learning for the input data and learning of the hierarchical predictor. Our approach optimizes an objective function which favors balanced and easily-separable multi-way node partitions. We theoretically analyze this objective, showing that it gives rise to a boosting style property and a bound on classification error. We next show how to extend the algorithm to conditional density estimation. We empirically validate both variants of the algorithm on text classification and language modeling, respectively, and show that they compare favorably to common baselines in terms of accuracy and running time.}
}
Endnote
%0 Conference Paper
%T Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation
%A Yacine Jernite
%A Anna Choromanska
%A David Sontag
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-jernite17a
%I PMLR
%P 1665--1674
%U http://proceedings.mlr.press/v70/jernite17a.html
%V 70
%X We consider multi-class classification where the predictor has a hierarchical structure that allows for a very large number of labels both at train and test time. The predictive power of such models can heavily depend on the structure of the tree, and although past work showed how to learn the tree structure, it expected that the feature vectors remained static. We provide a novel algorithm to simultaneously perform representation learning for the input data and learning of the hierarchical predictor. Our approach optimizes an objective function which favors balanced and easily-separable multi-way node partitions. We theoretically analyze this objective, showing that it gives rise to a boosting style property and a bound on classification error. We next show how to extend the algorithm to conditional density estimation. We empirically validate both variants of the algorithm on text classification and language modeling, respectively, and show that they compare favorably to common baselines in terms of accuracy and running time.
APA
Jernite, Y., Choromanska, A. & Sontag, D. (2017). Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1665-1674. Available from http://proceedings.mlr.press/v70/jernite17a.html.