This post provides a 'less maths, more intuition' overview of Analogies Explained: Towards Understanding Word Embeddings (ICML 2019, Best Paper Honourable Mention). The outline follows that of the conference presentation. Target audience: general machine learning, NLP, computational linguistics.

Background

Word Embeddings

Word embeddings are numerical vector representations of words. Each entry, or dimension, of a word embedding can be thought of as capturing some semantic or syntactic feature of the word, and the full vector can be considered as co-ordinates of the word in a high-dimensional space. Word embeddings can be generated explicitly, e.g. from rows of word co-occurrence statistics (or low-rank approximations of such statistics); or by neural network methods such as Word2Vec (W2V) or Glove.

The latter, neural word embeddings, are found to be highly useful in natural language processing (NLP) tasks, such as evaluating word similarity, identifying named entities and assessing positive or negative sentiment in a passage of text (e.g. a customer review).

Analogies

An intriguing property of neural word embeddings is that analogies can often be solved simply by adding/subtracting word embeddings. For example, the classic analogy:

$man$ is to $king$ as $woman$ is to $queen$

can be solved using word embeddings by finding the embedding closest to $\mathbf{w}_{king} - \mathbf{w}_{man} + \mathbf{w}_{woman}$, which turns out to be $\mathbf{w}_{queen}$ ($\mathbf{w}_{x}$ denotes the embedding of word $x$). Note that words in the question (here $man$, $king$ and $woman$) are typically omitted from the search. This suggests the relationship:

$$\mathbf{w}_{queen} \approx \mathbf{w}_{king} - \mathbf{w}_{man} + \mathbf{w}_{woman}, \label{eq:one}\tag{1}$$

or, in geometric terms, that word embeddings of analogies approximately form parallelograms:

$$\mathbf{w}_{queen} - \mathbf{w}_{woman} \approx \mathbf{w}_{king} - \mathbf{w}_{man}.$$
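As a toy sketch of this vector arithmetic: the analogy can be completed by a nearest-neighbour search around $\mathbf{w}_{king} - \mathbf{w}_{man} + \mathbf{w}_{woman}$. The embeddings below are invented for illustration; real ones would come from a trained W2V or Glove model.

```python
import numpy as np

# Hypothetical toy embeddings, invented for this example only.
emb = {
    "man":   np.array([1.0, 0.0, 0.1]),
    "king":  np.array([1.0, 1.0, 0.0]),
    "woman": np.array([0.0, 0.1, 1.0]),
    "queen": np.array([0.1, 1.1, 1.0]),
    "apple": np.array([-1.0, 0.2, -0.5]),
}

def solve_analogy(a, a_star, b, emb):
    """Return the word whose embedding is closest (by cosine similarity)
    to emb[a_star] - emb[a] + emb[b], excluding the three query words."""
    target = emb[a_star] - emb[a] + emb[b]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = {w: v for w, v in emb.items() if w not in {a, a_star, b}}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(solve_analogy("man", "king", "woman", emb))  # prints: queen
```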

Whilst this fits our intuition, the phenomenon is intriguing since word embeddings are not trained to achieve it! In practice, word embeddings of analogies do not perfectly form parallelograms:

This shows the (exact) parallelogram formed by $\mathbf{w}_{man}$, $\mathbf{w}_{king}$ and $\mathbf{w}_{woman}$, fixed in a plane, and a selection of word embeddings shown relative to it. We see that the embedding of $queen$ does not sit at its fourth corner, but is the closest to it. Word embeddings of related words lie relatively close by, and random unrelated words are further away.

We explain the relationship between word embeddings of analogies \eqref{eq:one} by explaining the gap between $\mathbf{w}_{queen}$ and $\mathbf{w}_{king} - \mathbf{w}_{man} + \mathbf{w}_{woman}$, and why it is small, often smallest, for the word that completes the analogy.

To understand why semantic relationships between words give rise to geometric relationships between word embeddings, we first consider what W2V embeddings learn.

Word2Vec

W2V (SkipGram with negative sampling, SGNS) is an algorithm that generates word embeddings by training the weights of a 2-layer "neural network" to predict context words $c_j$ (i.e. words that fall within a context window of fixed size $l$) around each word $w_i$ (referred to as a target word) across a text corpus.

Predicting $p(c_j|w_i)$, for all context words $c_j$ in a dictionary of all unique words, was initially considered with a softmax function, but instead a sigmoid function and negative sampling were used to reduce computational cost.
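The objective for a single (target, context) pair can be sketched as follows, with toy random vectors standing in for trained weights: rather than normalising over the whole dictionary (softmax), SGNS scores the true pair against $k$ randomly drawn "negative" context words with a sigmoid.

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)           # target word embedding (toy stand-in)
c_pos = rng.normal(size=d)       # observed context word embedding
C_neg = rng.normal(size=(5, d))  # k = 5 negative-sample context embeddings

# SGNS maximises log sigma(w.c) for the observed pair and log sigma(-w.c')
# for each negative sample, avoiding a softmax over the full dictionary.
loss = -np.log(sigmoid(w @ c_pos)) - np.log(sigmoid(-(C_neg @ w))).sum()
```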

Levy & Goldberg (2014) showed that, as a result, the weight matrices $\mathbf{W}$, $\mathbf{C}$ (whose columns are the word embeddings $\mathbf{w}_i$ and context embeddings $\mathbf{c}_j$) approximately factorise a matrix of shifted Pointwise Mutual Information (PMI):

$$\mathbf{w}_i^\top \mathbf{c}_j \approx \text{PMI}(w_i, c_j) - \log k,$$

where $k$ is the chosen number of negative samples and

$$\text{PMI}(w_i, c_j) = \log \frac{p(w_i, c_j)}{p(w_i)\,p(c_j)}.$$

Dropping the shift term "$-\log k$", an artefact of the W2V algorithm (that we reconsider in the paper, Sec 5.5, 6.8), the relationship shows that an embedding $\mathbf{w}_i$ can be considered a low-dimensional projection of a row of the PMI matrix (a PMI vector, which we denote $\mathbf{p}_{w_i}$).
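To make the notion of a PMI vector concrete, here is a minimal sketch that estimates PMI from raw (target, context) co-occurrence counts; the pair list is invented purely for illustration.

```python
import numpy as np
from collections import Counter

# Invented toy (target word, context word) co-occurrence pairs.
pairs = [("king", "royal"), ("king", "crown"), ("queen", "royal"),
         ("queen", "crown"), ("man", "person"), ("woman", "person"),
         ("king", "man"), ("queen", "woman")]

pair_counts = Counter(pairs)
w_counts = Counter(w for w, _ in pairs)  # target word counts
c_counts = Counter(c for _, c in pairs)  # context word counts
total = len(pairs)

def pmi(w, c):
    """PMI(w, c) = log p(w, c) / (p(w) p(c)), estimated from counts."""
    if pair_counts[(w, c)] == 0:
        return float("-inf")  # log 0; in practice often clipped to 0 (PPMI)
    return np.log(pair_counts[(w, c)] * total / (w_counts[w] * c_counts[c]))

# The PMI vector of "king": one entry per context word in the dictionary.
contexts = sorted(c_counts)
p_king = np.array([pmi("king", c) for c in contexts])
```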

Proving the embedding relationship of analogies

From above, it can be seen that the additive relationship between word embeddings of analogies \eqref{eq:one} follows if (a) an equivalent relationship exists between PMI vectors (where $\mathbf{p}_x$ denotes the PMI vector of word $x$), i.e.

$$\mathbf{p}_{queen} \approx \mathbf{p}_{king} - \mathbf{p}_{man} + \mathbf{p}_{woman}; \label{eq:two}\tag{2}$$

and (b) vector addition is sufficiently preserved under the low-rank projection induced by the loss function, as is readily achieved by a least squares loss function and as approximated by W2V and Glove.
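Point (b) can be illustrated directly: under a least squares (SVD) factorisation, the rank-$d$ embeddings are a linear map of the rows of the factorised matrix, so an additive relation between rows survives the projection exactly. A small numerical sketch, with a random stand-in for the PMI matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 50))  # random stand-in for a PMI matrix (rows = PMI vectors)
M[3] = M[0] - M[1] + M[2]      # impose an additive relation between four rows

# Least squares rank-d factorisation via SVD; embeddings E_i = M_i @ V_d.
d = 5
_, _, Vt = np.linalg.svd(M, full_matrices=False)
E = M @ Vt[:d].T

# The additive relation between rows carries over to the embeddings exactly,
# because each embedding is a linear function of the corresponding row of M.
print(np.allclose(E[3], E[0] - E[1] + E[2]))  # prints: True
```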

To prove that relationship \eqref{eq:two} arises between PMI vectors of an analogy, we show that \eqref{eq:two} follows from a particular paraphrase relationship, which is, in turn, shown to be equivalent to an analogy.

Paraphrases

When we say a word $w_*$ paraphrases a set of words $\mathcal{W}$, we mean, intuitively, that they are semantically interchangeable in the text. For example, where $king$ appears, we might instead see $man$ and $royal$ close together. Mathematically, a best choice paraphrase word $w_*$ can be defined as that which maximises the likelihood of the context words observed around $\mathcal{W}$. In other words, the distribution of context words observed around $w_*$, defined over the dictionary, should be similar to that around $\mathcal{W}$, as measured by Kullback-Leibler (KL) divergence. Whilst these distributions are discrete and unordered, they can be pictured as two similar histograms over the dictionary of context words.

Formally, we say $w_*$ paraphrases $\mathcal{W}$ if the paraphrase error $\boldsymbol{\rho}^{\mathcal{W},w_*}$ is (element-wise) small, where:

$$\rho_j^{\mathcal{W},w_*} = \log \frac{p(c_j|\mathcal{W})}{p(c_j|w_*)}.$$

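The element-wise error and its connection to KL divergence can be sketched numerically; the two context distributions below are invented for illustration.

```python
import numpy as np

# Hypothetical context-word distributions over a toy dictionary:
# p(c | W) for the word set W, and p(c | w*) for a candidate paraphrase w*.
contexts = ["royal", "crown", "person", "apple"]
p_c_given_W  = np.array([0.40, 0.25, 0.30, 0.05])
p_c_given_ws = np.array([0.42, 0.28, 0.27, 0.03])

rho = np.log(p_c_given_W / p_c_given_ws)  # paraphrase error, element-wise

# Averaging rho under p(c|W) gives exactly KL( p(c|W) || p(c|w*) ),
# the divergence referred to above; it is non-negative, and small here.
kl = float((p_c_given_W * rho).sum())
```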
To see the relevance of paraphrases, we compare the sum of PMI vectors of two words $w_1, w_2$ to the PMI vector of any word $w_*$ by considering each ($j^{th}$) component of the difference vector $\mathbf{p}_{w_1} + \mathbf{p}_{w_2} - \mathbf{p}_{w_*}$:

$$(\mathbf{p}_{w_1})_j + (\mathbf{p}_{w_2})_j - (\mathbf{p}_{w_*})_j
= \underbrace{\log\frac{p(c_j|\mathcal{W})}{p(c_j|w_*)}}_{\rho_j^{\mathcal{W},w_*}}
+ \underbrace{\log\frac{p(\mathcal{W})}{p(w_1)\,p(w_2)}}_{\sigma^{\mathcal{W}}}
- \underbrace{\log\frac{p(\mathcal{W}|c_j)}{p(w_1|c_j)\,p(w_2|c_j)}}_{\tau_j^{\mathcal{W}}}$$

We see that the difference can be written as a paraphrase error, small only if $w_*$ paraphrases $\mathcal{W} = \{w_1, w_2\}$, and dependence error terms ($\sigma^{\mathcal{W}}$, $\tau_j^{\mathcal{W}}$) inherent to $\mathcal{W}$ that do not depend on $w_*$. Formally, we have:

Lemma 1: For any word $w_*$ and word set $\mathcal{W}$, $|\mathcal{W}| \le l$, where $l$ is the context window size:

$$\sum_{w_i \in \mathcal{W}} \mathbf{p}_{w_i} - \mathbf{p}_{w_*} = \boldsymbol{\rho}^{\mathcal{W},w_*} + \sigma^{\mathcal{W}}\mathbf{1} - \boldsymbol{\tau}^{\mathcal{W}},$$

where $\rho_j^{\mathcal{W},w_*} = \log\tfrac{p(c_j|\mathcal{W})}{p(c_j|w_*)}$, $\sigma^{\mathcal{W}} = \log\tfrac{p(\mathcal{W})}{\prod_i p(w_i)}$ and $\tau_j^{\mathcal{W}} = \log\tfrac{p(\mathcal{W}|c_j)}{\prod_i p(w_i|c_j)}$.

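The decomposition in Lemma 1 can be checked numerically: the sketch below builds a random joint distribution over occurrence indicators for $w_1$, $w_2$, $w_*$ and a context word (a purely illustrative construction), derives every marginal from it, and confirms the identity holds exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
nc = 6  # number of context words

# Random joint p(a, b, s, c): a, b, s indicate occurrence of w1, w2, w*
# in a context window; c indexes the context word.
P = rng.random((2, 2, 2, nc))
P /= P.sum()

p_c   = P.sum(axis=(0, 1, 2))        # p(c_j)
p_w1c = P[1].sum(axis=(0, 1))        # p(w1, c_j)
p_w2c = P[:, 1].sum(axis=(0, 1))     # p(w2, c_j)
p_wsc = P[:, :, 1].sum(axis=(0, 1))  # p(w*, c_j)
p_Wc  = P[1, 1].sum(axis=0)          # p(W, c_j) = p(w1, w2, c_j)
p_w1, p_w2, p_ws, p_W = p_w1c.sum(), p_w2c.sum(), p_wsc.sum(), p_Wc.sum()

pmi = lambda p_wc, p_w: np.log(p_wc / (p_w * p_c))  # PMI(w, .) as a vector

lhs = pmi(p_w1c, p_w1) + pmi(p_w2c, p_w2) - pmi(p_wsc, p_ws)

rho   = np.log((p_Wc / p_W) / (p_wsc / p_ws))                  # paraphrase error
sigma = np.log(p_W / (p_w1 * p_w2))                            # dependence of W
tau   = np.log((p_Wc / p_c) / (p_w1c / p_c * (p_w2c / p_c)))   # conditional dependence

print(np.allclose(lhs, rho + sigma - tau))  # prints: True
```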
This connects paraphrasing to PMI vector addition, as appears in \eqref{eq:two}. To develop this further, paraphrasing is generalised, replacing $w_*$ by a word set $\mathcal{W}_*$, to a relationship between any two word sets. The underlying principle remains the same: word sets paraphrase one another if the distributions of context words around them are similar (indeed, the original paraphrase definition is recovered if $\mathcal{W}_*$ contains a single word). Analogously to above, we find:

Lemma 2: For any word sets $\mathcal{W}$, $\mathcal{W}_*$; $|\mathcal{W}|, |\mathcal{W}_*| \le l$:

$$\sum_{w_i \in \mathcal{W}} \mathbf{p}_{w_i} - \sum_{w_i \in \mathcal{W}_*} \mathbf{p}_{w_i} = \boldsymbol{\rho}^{\mathcal{W},\mathcal{W}_*} + (\sigma^{\mathcal{W}} - \sigma^{\mathcal{W}_*})\mathbf{1} - (\boldsymbol{\tau}^{\mathcal{W}} - \boldsymbol{\tau}^{\mathcal{W}_*}),$$

where $\rho_j^{\mathcal{W},\mathcal{W}_*} = \log\tfrac{p(c_j|\mathcal{W})}{p(c_j|\mathcal{W}_*)}$.

We can apply this to our example by setting $\mathcal{W} = \{king, woman\}$ and $\mathcal{W}_* = \{man, queen\}$, whereby

$$\mathbf{p}_{king} + \mathbf{p}_{woman} - \mathbf{p}_{man} - \mathbf{p}_{queen} \approx \mathbf{0}$$

if $\mathcal{W}$ paraphrases $\mathcal{W}_*$ (meaning $\boldsymbol{\rho}^{\mathcal{W},\mathcal{W}_*}$ is small), subject to net statistical dependencies of words within $\mathcal{W}$ and $\mathcal{W}_*$, i.e. $\sigma^{\mathcal{W}} - \sigma^{\mathcal{W}_*}$ and $\boldsymbol{\tau}^{\mathcal{W}} - \boldsymbol{\tau}^{\mathcal{W}_*}$.
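Lemma 2's decomposition can likewise be checked numerically on the example sets: the sketch below draws a random joint distribution over occurrence indicators for the four words and a context word (again, a purely illustrative construction), derives all marginals, and confirms the identity exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
nc = 6  # number of context words

# Random joint over indicators (king, woman, man, queen) and a context word c.
P = rng.random((2, 2, 2, 2, nc))
P /= P.sum()

p_c = P.sum(axis=(0, 1, 2, 3))

def word_marginal(axis):
    """p(word, c): fix one indicator axis to 1 and marginalise the rest."""
    idx = [slice(None)] * 4
    idx[axis] = 1
    return P[tuple(idx)].sum(axis=(0, 1, 2))

p_kc, p_wc, p_mc, p_qc = (word_marginal(i) for i in range(4))
p_Wc  = P[1, 1].sum(axis=(0, 1))        # p(W, c),  W  = {king, woman}
p_Wsc = P[:, :, 1, 1].sum(axis=(0, 1))  # p(W*, c), W* = {man, queen}

pmi = lambda p_xc: np.log(p_xc / (p_xc.sum() * p_c))
lhs = pmi(p_kc) + pmi(p_wc) - pmi(p_mc) - pmi(p_qc)

rho = np.log((p_Wc / p_Wc.sum()) / (p_Wsc / p_Wsc.sum()))
sigma = np.log(p_Wc.sum() / (p_kc.sum() * p_wc.sum())) \
      - np.log(p_Wsc.sum() / (p_mc.sum() * p_qc.sum()))
tau = np.log((p_Wc / p_c) / ((p_kc / p_c) * (p_wc / p_c))) \
    - np.log((p_Wsc / p_c) / ((p_mc / p_c) * (p_qc / p_c)))

print(np.allclose(lhs, rho + sigma - tau))  # prints: True
```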

Thus, \eqref{eq:two} holds, subject to dependence error, if $\{king, woman\}$ paraphrases $\{man, queen\}$.

This establishes a semantic relationship as a sufficient condition for the geometric relationship we aim to explain – but that semantic relationship is not an analogy. What remains, then, is to explain why a general analogy "$a$ is to $a_*$ as $b$ is to $b_*$" implies that $\{a_*, b\}$ paraphrases $\{a, b_*\}$. We show that these conditions are in fact equivalent by reinterpreting paraphrases as word transformations.

Word Transformation

From above, the paraphrase of a word set $\mathcal{W}$ by a word $w_*$ can be thought of as drawing a semantic equivalence between $\mathcal{W}$ and $w_*$. Alternatively, we can choose a particular word $w$ in $\mathcal{W}$ and view the paraphrase as indicating words (i.e. all others in $\mathcal{W}$, denoted $\mathcal{W}_+$) that, when added to $w$, make it "more like" – or transform it to – $w_*$. For example, the paraphrase of $\{man, royal\}$ by $king$ can be interpreted as a word transformation from $man$ to $king$ by adding $royal$. In effect, the added words narrow the context. More precisely, they alter the distribution of context words found around $w$ to more closely align with that of $w_*$. Denoting a paraphrase by $\approx_P$, we can represent this as:

$$man + royal \;\approx_P\; king,$$

in which the paraphrase can be seen as the "glue" in a relationship between, or rather, from $man$ to $king$. Thus, if $w \in \mathcal{W}$ and $\mathcal{W}_+ = \mathcal{W} \backslash \{w\}$, to say "$w_*$ paraphrases $\mathcal{W}$" is equivalent to saying "there exists a word transformation from $w$ to $w_*$ by adding $\mathcal{W}_+$". To be clear, nothing changes other than perspective.

Extending the concept to paraphrases between word sets $\mathcal{W}$, $\mathcal{W}_*$, we can choose any $w \in \mathcal{W}$, $w_* \in \mathcal{W}_*$, and view the paraphrase as defining a relationship between $w$ and $w_*$ in which $\mathcal{W}_+ = \mathcal{W}\backslash\{w\}$ is added to $w$ and $\mathcal{W}_- = \mathcal{W}_*\backslash\{w_*\}$ to $w_*$:

$$w + \mathcal{W}_+ \;\approx_P\; w_* + \mathcal{W}_-$$

This is not a word transformation as above, since it lacks the same notion of direction from $w$ to $w_*$. To remedy this, rather than considering the words in $\mathcal{W}_-$ as being added to $w_*$, we consider them subtracted from $w$ (hence the naming convention):

$$w + \mathcal{W}_+ - \mathcal{W}_- \;\approx_P\; w_*$$
Where added words narrow context, subtracted words can be thought of as broadening the context.

We say there exists a word transformation from word $w$ to word $w_*$, with transformation parameters $(\mathcal{W}_+, \mathcal{W}_-)$, iff $\{w\} \cup \mathcal{W}_+$ paraphrases $\{w_*\} \cup \mathcal{W}_-$.

Intuition

The intuition behind word transformations mirrors simple algebra, e.g. 8 is made equivalent to 5 by adding 3 to the right ($5 + 3 = 8$), or subtracting 3 from the left ($8 - 3 = 5$). Analogously, with paraphrasing as a measure of equivalence, we can identify words ($\mathcal{W}_+$, $\mathcal{W}_-$) that, when added to/subtracted from $w$, make it equivalent to $w_*$. In doing so, just as 3 describes the difference between 8 and 5 in the numeric example, we find words that describe the difference between $w$ and $w_*$, or rather, how "$w$ is to $w_*$".

With our initial paraphrases, words could only be added to $w$ to describe its semantic difference to $w_*$, limiting the tools available to discrete words in the dictionary. Now, differences between other words can also be used, offering a far richer toolkit: e.g. the difference between $man$ and $king$ can crudely be explained by, say, $royal$ or $crown$, but can more closely be described by the difference between $woman$ and $queen$.

Interpreting Analogies

We can now mathematically interpret the language of an analogy:

We say "$a$ is to $a_*$ as $b$ is to $b_*$" iff there exist $\mathcal{W}_+, \mathcal{W}_-$ that serve as transformation parameters to transform both $a$ to $a_*$ and $b$ to $b_*$.

That is, within the analogy wording, each instance of “is to” refers to the parameters of a word transformation and “as” implies their equality. Thus the semantic differences within each word pair, as captured by the transformation parameters, are the same – fitting intuition and now defined explicitly.

So, an analogy is a pair of word transformations with common parameters $(\mathcal{W}_+, \mathcal{W}_-)$, but what are those parameters? Fortunately, we need not search or guess. We show in the paper (Sec 6.4) that if an analogy holds, then any parameters that transform one word pair, e.g. $a$ to $a_*$, must also transform the other pair. As such, we can choose $\mathcal{W}_+ = \{a_*\}$, $\mathcal{W}_- = \{a\}$, which perfectly transform $a$ to $a_*$ since $\{a, a_*\}$ paraphrases $\{a_*, a\}$ exactly (note that ordering is irrelevant in paraphrases, e.g. $a, a_*$ paraphrases $a_*, a$). But, if the analogy holds, those same parameters must also transform $b$ to $b_*$, meaning that $\{b, a_*\}$ paraphrases $\{b_*, a\}$.

Thus, $a$ is to $a_*$ as $b$ is to $b_*$ if and only if $\{a_*, b\}$ paraphrases $\{a, b_*\}$.

This completes the chain:

analogies are equivalent to word transformations with common transformation parameters that describe the common semantic difference;

those word transformations are equivalent to paraphrases, the first of which is rendered trivial under a particular choice of transformation parameters;

the second paraphrase leads to a geometric relationship between PMI vectors \eqref{eq:two}, subject to the accuracy of the paraphrase ($\boldsymbol{\rho}$) and dependence error terms ($\sigma$, $\boldsymbol{\tau}$); and

under low-dimensional projection (induced by the loss function), the same geometric relationship manifests in word embeddings of analogies \eqref{eq:one}, as seen in word embeddings of W2V and Glove.

Returning to an earlier plot, we can now explain the "gap" in terms of paraphrase ($\boldsymbol{\rho}$) and dependence ($\sigma$, $\boldsymbol{\tau}$) error terms, and understand why it is small, often smallest, for the word completing the analogy.

Related Work

Several other works aim to theoretically explain the analogy phenomenon, in particular:

Arora et al. (2016) propose a latent variable model for text generation that is claimed, inter alia, to explain analogies; however, strong a priori assumptions are made about the arrangement of word vectors that we do not require. More recently, we have shown that certain results of this work contradict the relationship between W2V embeddings and PMI (Levy & Goldberg, 2014).

Gittens et al. (2017) introduce the idea of paraphrasing to explain analogies, from which we draw inspiration, but they include several assumptions that fail in practice, in particular that word frequencies follow a uniform distribution rather than their actual, highly non-uniform Zipf distribution.

Ethayarajh et al. (2019) look to show that word embeddings of analogies form parallelograms by considering the latter's geometric properties. However, there are several issues: (i) that all points must be co-planar is assumed without explanation; (ii) that opposite sides must have similar direction is omitted (as such, a "bow-tie" shape satisfies their Lemma 1); and (iii) that opposite sides must have similar Euclidean length is translated to a statistical relationship termed "csPMI" – unfortunately, that translation is erroneous since it relies on the embedding matrix $\mathbf{C}$ being a scalar multiple ($\lambda\mathbf{W}$) of $\mathbf{W}$, which is false; further, csPMI bears no connection to analogies or semantics.

Our work therefore provides the first end-to-end explanation for the geometric relationship between word embeddings observed for analogies.

Further Work

Two recent works build on this paper: