Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining

Nicholas Monath, Manzil Zaheer, Kelsey Allen, Andrew McCallum
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:9308-9330, 2023.

Abstract

Dual encoder models are ubiquitous in modern classification and retrieval. Crucial for training such dual encoders is an accurate estimation of gradients from the partition function of the softmax over the large output space; this requires finding negative targets that contribute most significantly (‘hard negatives’). Since dual encoder model parameters change during training, the use of traditional static nearest neighbor indexes can be sub-optimal. These static indexes (1) periodically require expensive re-building of the index, which in turn requires (2) expensive re-encoding of all targets using updated model parameters. This paper addresses both of these challenges. First, we introduce an algorithm that uses a tree structure to approximate the softmax with provable bounds and that dynamically maintains the tree. Second, we approximate the effect of a gradient update on target encodings with an efficient Nyström low-rank approximation. In our empirical study on datasets with over twenty million targets, our approach cuts error by half relative to oracle brute-force negative mining. Furthermore, our method surpasses the prior state-of-the-art while using 150x less accelerator memory.
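To make the abstract's two ideas concrete, below is a minimal NumPy sketch, not the paper's implementation: the hard-negative indices stand in for results from the paper's dynamically maintained tree index (elided here), and the least-squares landmark map is a simplified stand-in for the paper's Nyström correction. All names (approx_softmax_probs, low_rank_refresh, encode_fn) are hypothetical.

import numpy as np

def approx_softmax_probs(q, targets, hard_idx):
    """Approximate the softmax over a huge target set using only mined
    hard negatives, whose terms dominate the partition function
    Z = sum_j exp(<q, t_j>)."""
    logits = targets[hard_idx] @ q                 # scores of hard negatives only
    w = np.exp(logits - logits.max())              # numerically stable exponentials
    return w / w.sum()                             # approximate p_j = exp(s_j) / Z

def low_rank_refresh(cached, landmark_idx, encode_fn):
    """After a gradient step, re-encode only m landmark targets exactly,
    fit a least-squares map from stale to fresh embeddings, and apply it
    to the whole cache instead of re-encoding all N targets."""
    old = cached[landmark_idx]                     # (m, d) stale landmark embeddings
    new = encode_fn(landmark_idx)                  # (m, d) freshly re-encoded landmarks
    W, *_ = np.linalg.lstsq(old, new, rcond=None)  # (d, d) low-rank correction map
    return cached @ W                              # approximately refreshed cache

# Toy usage: 10,000 cached targets, 32 landmarks, a simulated encoder update.
rng = np.random.default_rng(0)
cached = rng.normal(size=(10_000, 64))
drift = np.eye(64) + 0.01 * rng.normal(size=(64, 64))  # pretend parameter update
encode_fn = lambda idx: cached[idx] @ drift            # "re-encode" under new params
refreshed = low_rank_refresh(cached, rng.choice(10_000, size=32, replace=False), encode_fn)
probs = approx_softmax_probs(rng.normal(size=64), refreshed, np.arange(50))

The trade-off both functions share is the one the paper targets: touch only a small subset of the twenty-million-target space per step (the mined negatives, the landmarks) while keeping the softmax gradients and cached encodings close to what exact recomputation would give.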

Cite this Paper

BibTeX
@InProceedings{pmlr-v206-monath23a,
  title     = {Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining},
  author    = {Monath, Nicholas and Zaheer, Manzil and Allen, Kelsey and McCallum, Andrew},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {9308--9330},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/monath23a/monath23a.pdf},
  url       = {https://proceedings.mlr.press/v206/monath23a.html},
  abstract  = {Dual encoder models are ubiquitous in modern classification and retrieval. Crucial for training such dual encoders is an accurate estimation of gradients from the partition function of the softmax over the large output space; this requires finding negative targets that contribute most significantly (‘hard negatives’). Since dual encoder model parameters change during training, the use of traditional static nearest neighbor indexes can be sub-optimal. These static indexes (1) periodically require expensive re-building of the index, which in turn requires (2) expensive re-encoding of all targets using updated model parameters. This paper addresses both of these challenges. First, we introduce an algorithm that uses a tree structure to approximate the softmax with provable bounds and that dynamically maintains the tree. Second, we approximate the effect of a gradient update on target encodings with an efficient Nyström low-rank approximation. In our empirical study on datasets with over twenty million targets, our approach cuts error by half relative to oracle brute-force negative mining. Furthermore, our method surpasses the prior state-of-the-art while using 150x less accelerator memory.}
}
Endnote
%0 Conference Paper
%T Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining
%A Nicholas Monath
%A Manzil Zaheer
%A Kelsey Allen
%A Andrew McCallum
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-monath23a
%I PMLR
%P 9308--9330
%U https://proceedings.mlr.press/v206/monath23a.html
%V 206
%X Dual encoder models are ubiquitous in modern classification and retrieval. Crucial for training such dual encoders is an accurate estimation of gradients from the partition function of the softmax over the large output space; this requires finding negative targets that contribute most significantly (‘hard negatives’). Since dual encoder model parameters change during training, the use of traditional static nearest neighbor indexes can be sub-optimal. These static indexes (1) periodically require expensive re-building of the index, which in turn requires (2) expensive re-encoding of all targets using updated model parameters. This paper addresses both of these challenges. First, we introduce an algorithm that uses a tree structure to approximate the softmax with provable bounds and that dynamically maintains the tree. Second, we approximate the effect of a gradient update on target encodings with an efficient Nyström low-rank approximation. In our empirical study on datasets with over twenty million targets, our approach cuts error by half relative to oracle brute-force negative mining. Furthermore, our method surpasses the prior state-of-the-art while using 150x less accelerator memory.
APA
Monath, N., Zaheer, M., Allen, K. & McCallum, A. (2023). Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:9308-9330. Available from https://proceedings.mlr.press/v206/monath23a.html.
