Analogies Explained: Towards Understanding Word Embeddings

Carl Allen, Timothy Hospedales
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:223-231, 2019.

Abstract

Word embeddings generated by neural network methods such as word2vec (W2V) are well known to exhibit seemingly linear behaviour, e.g. the embeddings of the analogy “woman is to queen as man is to king” approximately describe a parallelogram. This property is particularly intriguing since the embeddings are not trained to achieve it. Several explanations have been proposed, but each introduces assumptions that do not hold in practice. We derive a probabilistically grounded definition of paraphrasing, which we re-interpret as word transformation, a mathematical description of “$w_x$ is to $w_y$”. From these concepts we prove the existence of the linear relationships between W2V-type embeddings that underlie the analogical phenomenon, identifying explicit error terms.
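
As an illustration of the parallelogram property discussed in the abstract (not code from the paper), the following minimal Python sketch tests the offset relation queen ≈ king − man + woman, assuming a pretrained word2vec model stored at a hypothetical path "w2v.bin" and loaded with gensim.

    # Minimal sketch (assumption: a pretrained word2vec model at "w2v.bin",
    # hypothetical path): test the analogy "woman is to queen as man is to king"
    # via the vector offset / parallelogram relation.
    import numpy as np
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("w2v.bin", binary=True)

    # Parallelogram test: w_queen should lie close to w_king - w_man + w_woman.
    target = wv["king"] - wv["man"] + wv["woman"]
    cos = np.dot(target, wv["queen"]) / (np.linalg.norm(target) * np.linalg.norm(wv["queen"]))
    print(f"cosine(king - man + woman, queen) = {cos:.3f}")

    # Equivalent query using gensim's built-in analogy search.
    print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

With typical pretrained embeddings the cosine similarity is high and "queen" appears among the top neighbours, which is the empirical behaviour the paper sets out to explain.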

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-allen19a,
  title = {Analogies Explained: Towards Understanding Word Embeddings},
  author = {Allen, Carl and Hospedales, Timothy},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {223--231},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/allen19a/allen19a.pdf},
  url = {https://proceedings.mlr.press/v97/allen19a.html},
  abstract = {Word embeddings generated by neural network methods such as word2vec (W2V) are well known to exhibit seemingly linear behaviour, e.g. the embeddings of analogy “woman is to queen as man is to king” approximately describe a parallelogram. This property is particularly intriguing since the embeddings are not trained to achieve it. Several explanations have been proposed, but each introduces assumptions that do not hold in practice. We derive a probabilistically grounded definition of paraphrasing that we re-interpret as word transformation, a mathematical description of “$w_x$ is to $w_y$”. From these concepts we prove existence of linear relationship between W2V-type embeddings that underlie the analogical phenomenon, identifying explicit error terms.}
}
Endnote
%0 Conference Paper
%T Analogies Explained: Towards Understanding Word Embeddings
%A Carl Allen
%A Timothy Hospedales
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-allen19a
%I PMLR
%P 223--231
%U https://proceedings.mlr.press/v97/allen19a.html
%V 97
%X Word embeddings generated by neural network methods such as word2vec (W2V) are well known to exhibit seemingly linear behaviour, e.g. the embeddings of analogy “woman is to queen as man is to king” approximately describe a parallelogram. This property is particularly intriguing since the embeddings are not trained to achieve it. Several explanations have been proposed, but each introduces assumptions that do not hold in practice. We derive a probabilistically grounded definition of paraphrasing that we re-interpret as word transformation, a mathematical description of “$w_x$ is to $w_y$”. From these concepts we prove existence of linear relationship between W2V-type embeddings that underlie the analogical phenomenon, identifying explicit error terms.
APA
Allen, C. & Hospedales, T. (2019). Analogies Explained: Towards Understanding Word Embeddings. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:223-231. Available from https://proceedings.mlr.press/v97/allen19a.html.