Label Partitioning For Sublinear Ranking

Jason Weston, Ameesh Makadia, Hector Yee
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):181-189, 2013.

Abstract

We consider the case of ranking a very large set of labels, items, or documents, which is common to information retrieval, recommendation, and large-scale annotation tasks. We present a general approach for converting an algorithm which has linear time in the size of the set to a sublinear one via label partitioning. Our method consists of learning an input partition and a label assignment to each partition of the space such that precision at k is optimized, which is the loss function of interest in this setting. Experiments on large-scale ranking and recommendation tasks show that our method not only makes the original linear time algorithm computationally tractable, but can also improve its performance.
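The idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm (which jointly optimizes the partition and assignment for precision at k); it is a hypothetical simplification in which inputs are clustered with plain k-means-style updates, each label is assigned to the partitions where its relevant inputs fall, and a base linear scorer then ranks only the candidate labels of the query's partition, giving query cost sublinear in the total label set.

```python
import numpy as np

# Hypothetical toy setup (all names and sizes are illustrative):
rng = np.random.default_rng(0)
n_inputs, n_labels, dim, n_parts = 200, 1000, 16, 8
X = rng.normal(size=(n_inputs, dim))          # input features
W = rng.normal(size=(n_labels, dim))          # base linear scorer: one weight vector per label
y = rng.integers(0, n_labels, size=n_inputs)  # one relevant label per training input

# 1) Partition the input space (here: a few k-means-style iterations).
centroids = X[rng.choice(n_inputs, n_parts, replace=False)].copy()
for _ in range(10):
    assign = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for p in range(n_parts):
        if (assign == p).any():
            centroids[p] = X[assign == p].mean(0)

# 2) Assign each label to the partitions where its relevant inputs land.
labels_per_part = [np.unique(y[assign == p]) for p in range(n_parts)]

# 3) Query time: score only the labels assigned to the query's partition,
#    so cost depends on the partition's label count, not on n_labels.
def rank(x, k=5):
    p = np.argmin(((centroids - x) ** 2).sum(-1))
    cand = labels_per_part[p]
    if cand.size == 0:                 # fall back if a partition ended up empty
        cand = np.arange(n_labels)
    scores = W[cand] @ x
    return cand[np.argsort(-scores)][:k]

top = rank(X[0])
```

In the paper, the partitioner and the label assignment are learned to directly optimize precision at k of the overall system rather than fixed heuristically as above.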

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-weston13,
  title = {Label Partitioning For Sublinear Ranking},
  author = {Jason Weston and Ameesh Makadia and Hector Yee},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {181--189},
  year = {2013},
  editor = {Sanjoy Dasgupta and David McAllester},
  volume = {28},
  number = {2},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/weston13.pdf},
  url = {http://proceedings.mlr.press/v28/weston13.html},
  abstract = {We consider the case of ranking a very large set of labels, items, or documents, which is common to information retrieval, recommendation, and large-scale annotation tasks. We present a general approach for converting an algorithm which has linear time in the size of the set to a sublinear one via label partitioning. Our method consists of learning an input partition and a label assignment to each partition of the space such that precision at k is optimized, which is the loss function of interest in this setting. Experiments on large-scale ranking and recommendation tasks show that our method not only makes the original linear time algorithm computationally tractable, but can also improve its performance.}
}
Endnote
%0 Conference Paper
%T Label Partitioning For Sublinear Ranking
%A Jason Weston
%A Ameesh Makadia
%A Hector Yee
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-weston13
%I PMLR
%J Proceedings of Machine Learning Research
%P 181--189
%U http://proceedings.mlr.press
%V 28
%N 2
%W PMLR
%X We consider the case of ranking a very large set of labels, items, or documents, which is common to information retrieval, recommendation, and large-scale annotation tasks. We present a general approach for converting an algorithm which has linear time in the size of the set to a sublinear one via label partitioning. Our method consists of learning an input partition and a label assignment to each partition of the space such that precision at k is optimized, which is the loss function of interest in this setting. Experiments on large-scale ranking and recommendation tasks show that our method not only makes the original linear time algorithm computationally tractable, but can also improve its performance.
RIS
TY  - CPAPER
TI  - Label Partitioning For Sublinear Ranking
AU  - Jason Weston
AU  - Ameesh Makadia
AU  - Hector Yee
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-weston13
PB  - PMLR
SP  - 181
EP  - 189
DP  - PMLR
L1  - http://proceedings.mlr.press/v28/weston13.pdf
UR  - http://proceedings.mlr.press/v28/weston13.html
AB  - We consider the case of ranking a very large set of labels, items, or documents, which is common to information retrieval, recommendation, and large-scale annotation tasks. We present a general approach for converting an algorithm which has linear time in the size of the set to a sublinear one via label partitioning. Our method consists of learning an input partition and a label assignment to each partition of the space such that precision at k is optimized, which is the loss function of interest in this setting. Experiments on large-scale ranking and recommendation tasks show that our method not only makes the original linear time algorithm computationally tractable, but can also improve its performance.
ER  -
APA
Weston, J., Makadia, A. & Yee, H. (2013). Label Partitioning For Sublinear Ranking. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(2):181-189