On the Robustness of Text Vectorizers

Rémi Catellier, Samuel Vaiter, Damien Garreau
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:3782-3814, 2023.

Abstract

A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes in continuous inputs is well understood, the situation is less clear when considering discrete changes, for instance replacing one word with another in an input sentence. Our work formally proves that popular embedding schemes, such as concatenation, TF-IDF, and Paragraph Vector (a.k.a. doc2vec), exhibit robustness in the Hölder or Lipschitz sense with respect to the Hamming distance. We provide quantitative bounds for these schemes and demonstrate how the constants involved are affected by the length of the document. These findings are exemplified through a series of numerical examples.
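To make the setting concrete, here is a minimal sketch (not code from the paper) that uses scikit-learn's TfidfVectorizer as the TF-IDF embedding. It builds two documents at word-level Hamming distance 1 and measures the Euclidean distance between their embeddings, the quantity the paper bounds by a constant times the Hamming distance. The toy corpus is an illustrative assumption, not an example from the paper.

    # Sketch: embedding distance vs. Hamming distance for TF-IDF.
    # Assumes scikit-learn; the paper analyzes the embedding scheme
    # itself, not any particular implementation.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the mat",   # one word replaced: Hamming distance 1
        "a bird flew over the old house",
    ]

    def hamming(doc_a, doc_b):
        # Word-level Hamming distance between equal-length documents.
        a, b = doc_a.split(), doc_b.split()
        assert len(a) == len(b)
        return sum(wa != wb for wa, wb in zip(a, b))

    # Fit the vocabulary and IDF weights on the corpus, then embed.
    X = TfidfVectorizer().fit_transform(corpus).toarray()

    d_ham = hamming(corpus[0], corpus[1])      # = 1
    d_emb = np.linalg.norm(X[0] - X[1])        # distance between embeddings

    # Lipschitz-style robustness: d_emb <= C * d_ham, with the constant C
    # depending on the document length (the dependence the paper quantifies).
    print(f"Hamming: {d_ham}, embedding distance: {d_emb:.3f}")

Lengthening the shared part of the two documents and rerunning gives a feel for how the effect of a single replacement shrinks with document length, which is the dependence the paper's bounds make precise.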

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-catellier23a,
  title     = {On the Robustness of Text Vectorizers},
  author    = {Catellier, R\'{e}mi and Vaiter, Samuel and Garreau, Damien},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {3782--3814},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/catellier23a/catellier23a.pdf},
  url       = {https://proceedings.mlr.press/v202/catellier23a.html},
  abstract  = {A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes in continuous inputs is well understood, the situation is less clear when considering discrete changes, for instance replacing one word with another in an input sentence. Our work formally proves that popular embedding schemes, such as concatenation, TF-IDF, and Paragraph Vector (a.k.a. doc2vec), exhibit robustness in the Hölder or Lipschitz sense with respect to the Hamming distance. We provide quantitative bounds for these schemes and demonstrate how the constants involved are affected by the length of the document. These findings are exemplified through a series of numerical examples.}
}
Endnote
%0 Conference Paper
%T On the Robustness of Text Vectorizers
%A Rémi Catellier
%A Samuel Vaiter
%A Damien Garreau
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-catellier23a
%I PMLR
%P 3782--3814
%U https://proceedings.mlr.press/v202/catellier23a.html
%V 202
%X A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes in continuous inputs is well understood, the situation is less clear when considering discrete changes, for instance replacing one word with another in an input sentence. Our work formally proves that popular embedding schemes, such as concatenation, TF-IDF, and Paragraph Vector (a.k.a. doc2vec), exhibit robustness in the Hölder or Lipschitz sense with respect to the Hamming distance. We provide quantitative bounds for these schemes and demonstrate how the constants involved are affected by the length of the document. These findings are exemplified through a series of numerical examples.
APA
Catellier, R., Vaiter, S. & Garreau, D. (2023). On the Robustness of Text Vectorizers. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:3782-3814. Available from https://proceedings.mlr.press/v202/catellier23a.html.