A Tale of Two Efficient and Informative Negative Sampling Distributions

Shabnam Daghaghi, Tharun Medini, Nicholas Meisburger, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2319-2329, 2021.

Abstract

Softmax classifiers with a very large number of classes naturally occur in many applications, such as natural language processing and information retrieval. Computing the full softmax is costly from both a computational and an energy perspective. Various sampling approaches, popularly known as negative sampling (NS), have been proposed to overcome this challenge. Ideally, NS should sample negative classes from a distribution that depends on the input data, the current parameters, and the correct positive class. Unfortunately, because the parameters and data samples change with every iteration, no existing sampling scheme is both provably adaptive and efficient at generating negative classes. Practitioners therefore adopt alternative heuristics, such as random sampling, static frequency-based sampling, or learning-based biased sampling, which sacrifice either the sampling cost or the per-iteration adaptivity of the samples. In this paper, we present two classes of distributions for which the sampling scheme is truly adaptive and provably generates negative samples in near-constant time. Our C++ implementation on CPU is significantly superior, in both wall-clock time and accuracy, to the most optimized TensorFlow implementations of other popular negative sampling approaches on a powerful NVIDIA V100 GPU.
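
The abstract's key requirement is that negatives be drawn from a distribution that depends on the input, the current parameters, and the positive class, and that drawing them take near-constant time. One well-known way to meet both requirements is locality-sensitive hashing over the class weight vectors; the NumPy sketch below illustrates that general idea only. It is a toy illustration under that assumption, not the paper's C++ implementation, and every name in it (code_of, sample_negatives, the SimHash parameters) is hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_classes, dim = 100_000, 64
    W = rng.normal(scale=0.01, size=(n_classes, dim))  # current output-layer weights

    # SimHash-style LSH: classes whose weight vectors point in a similar
    # direction to the input embedding land in the same bucket, so colliding
    # classes are the high-score, "informative" negatives for that input.
    n_bits = 16
    planes = rng.normal(size=(n_bits, dim))            # random hyperplanes
    powers = 1 << np.arange(n_bits - 1, -1, -1)        # bit weights for packing codes

    def code_of(v):
        # Hash a single vector to an n_bits-bit integer.
        return int(((planes @ v > 0) * powers).sum())

    # Preprocess: bucket every class by the hash of its weight vector.
    # Rebuilding (or lazily updating) these buckets is how such a sampler
    # would track parameter updates during training.
    bits = (W @ planes.T > 0)                          # (n_classes, n_bits)
    codes = (bits * powers).sum(axis=1)
    table = {}
    for c, code in enumerate(codes):
        table.setdefault(int(code), []).append(c)

    def sample_negatives(x, true_class, k=5):
        # Negatives from the bucket x hashes to: adaptive to the input x, the
        # current weights W (via the buckets), and the positive class (excluded).
        bucket = [c for c in table.get(code_of(x), []) if c != true_class]
        if len(bucket) >= k:
            return [int(c) for c in rng.choice(bucket, size=k, replace=False)]
        # Bucket too small: top up with uniform negatives
        # (may return fewer than k; fine for a sketch).
        extra = rng.choice(n_classes, size=k - len(bucket), replace=False)
        return bucket + [int(c) for c in extra if c != true_class]

    # Sampled-softmax-style loss over just 1 + k classes instead of all 100,000.
    x, y = rng.normal(size=dim), 42
    cands = [y] + sample_negatives(x, y)
    logits = W[cands] @ x
    loss = np.log(np.exp(logits).sum()) - logits[0]
    print(f"sampled loss over {len(cands)} of {n_classes} classes: {loss:.4f}")

Because only the 1 + k candidate rows of W are touched per example, the per-step cost in this sketch is independent of the number of classes, while the adaptivity comes from hashing with the current weights and the current input. How the paper achieves and proves these properties is described in the full text.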

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-daghaghi21a,
  title     = {A Tale of Two Efficient and Informative Negative Sampling Distributions},
  author    = {Daghaghi, Shabnam and Medini, Tharun and Meisburger, Nicholas and Chen, Beidi and Zhao, Mengnan and Shrivastava, Anshumali},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2319--2329},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/daghaghi21a/daghaghi21a.pdf},
  url       = {https://proceedings.mlr.press/v139/daghaghi21a.html}
}
Endnote
%0 Conference Paper
%T A Tale of Two Efficient and Informative Negative Sampling Distributions
%A Shabnam Daghaghi
%A Tharun Medini
%A Nicholas Meisburger
%A Beidi Chen
%A Mengnan Zhao
%A Anshumali Shrivastava
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-daghaghi21a
%I PMLR
%P 2319--2329
%U https://proceedings.mlr.press/v139/daghaghi21a.html
%V 139
APA
Daghaghi, S., Medini, T., Meisburger, N., Chen, B., Zhao, M. & Shrivastava, A. (2021). A Tale of Two Efficient and Informative Negative Sampling Distributions. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2319-2329. Available from https://proceedings.mlr.press/v139/daghaghi21a.html.
