Representations of Words and Phrases and their Compositionality

From statwiki
Revision as of 03:26, 21 November 2018 by Y2299zha (Empirical Results)

Representations of Words and Phrases and their Compositionality is a popular paper published in 2013 by a Google team led by Tomas Mikolov. It is known for its impact on the field of Natural Language Processing, and the techniques described below are still in practice today.

Presented by

  • F. Jiang
  • J. Hu
  • Y. Zhang


The Skip-gram model is an NLP method in which a given word (the "target") is fed into the model, which outputs a vector of probabilities of finding each vocabulary word in the immediate context (the "surroundings") of the target word. Words or phrases that regularly appear together in the training data are deemed to have similar contexts, so the model assigns them high output probabilities for one another. For example, inputting the word "light" will likely produce an output vector with high values for the words "show" and "bulb".

[Figure: the Skip-gram model]

Skip-gram requires a pre-specified vocabulary of words or phrases containing all possible target and context words to work.
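The forward pass described above can be sketched with a toy vocabulary and random weights (the vocabulary, dimensions, and weight values below are illustrative placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; a real model would use the full pre-specified vocabulary.
vocab = ["light", "show", "bulb", "dark", "lamp"]
word_to_id = {w: i for i, w in enumerate(vocab)}

V, D = len(vocab), 4             # vocabulary size, embedding dimension
W_in = rng.normal(size=(V, D))   # input (target) embeddings
W_out = rng.normal(size=(V, D))  # output (context) embeddings

def context_probs(target):
    """Probability of each vocabulary word appearing in the target's context."""
    h = W_in[word_to_id[target]]          # embedding lookup for the target
    scores = W_out @ h                    # one score per vocabulary word
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

probs = context_probs("light")
print(dict(zip(vocab, probs.round(3))))
```

With trained weights, the probabilities for genuine context words such as "show" and "bulb" would dominate this distribution.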

Skip-gram Model

Hierarchical Softmax
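Hierarchical softmax replaces the flat softmax over the whole vocabulary with a binary tree (a Huffman tree in the paper) whose leaves are the words: a word's probability is the product of sigmoid-valued binary decisions along its root-to-leaf path, costing O(log V) instead of O(V). A minimal sketch of that computation, with a made-up path and random vectors standing in for a real tree:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D = 4
h = rng.normal(size=D)  # hidden vector of the target word

# Root-to-leaf path for one hypothetical word: each step is an
# (inner-node vector, direction) pair, direction +1 = left, -1 = right.
path = [(rng.normal(size=D), +1),
        (rng.normal(size=D), -1),
        (rng.normal(size=D), +1)]

p = 1.0
for v_node, direction in path:
    p *= sigmoid(direction * (v_node @ h))  # one binary decision per node

print(p)  # probability of reaching this leaf, i.e. of this word
```

Only the handful of inner-node vectors on the path need updating per example, which is where the speedup comes from.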

Negative Sampling

Under the Skip-gram model with a 1M-word vocabulary, every training example requires adjusting all 1M weight vectors in the output layer, which is very slow. Negative sampling solves this problem by updating only the weights for the true context word and a small number of randomly sampled "negative" words.
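One such update can be sketched as follows. Per the paper, the k negative words are drawn from the unigram distribution raised to the 3/4 power; the vocabulary size, counts, and hyperparameters below are toy values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
V, D, k, lr = 1000, 50, 5, 0.025     # toy vocab size, dims, negatives, learning rate
W_in = rng.normal(scale=0.1, size=(V, D))
W_out = rng.normal(scale=0.1, size=(V, D))

counts = rng.integers(1, 100, size=V).astype(float)  # placeholder word counts
noise = counts ** 0.75                # unigram distribution to the 3/4 power
noise /= noise.sum()

def train_pair(target, context):
    """One SGD step for a (target, context) pair with k negative samples."""
    h = W_in[target].copy()
    grad_h = np.zeros(D)
    negatives = rng.choice(V, size=k, p=noise)
    # Label 1 for the true context word, 0 for the sampled noise words.
    for word, label in [(context, 1.0)] + [(int(n), 0.0) for n in negatives]:
        g = sigmoid(W_out[word] @ h) - label   # logistic-loss gradient
        grad_h += g * W_out[word]
        W_out[word] -= lr * g * h              # touches only k+1 rows, not V
    W_in[target] -= lr * grad_h

train_pair(3, 7)
```

Only k+1 output rows are updated per example instead of all V, which is the entire speedup.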

Subsampling of Frequent Words
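The paper discards each occurrence of a word w with probability P(w) = 1 - sqrt(t / f(w)), where f(w) is the word's corpus frequency and t is a small threshold (around 10^-5 in the paper), so very frequent words like "the" are aggressively thinned out while rare words are kept. A minimal sketch (the word frequencies below are made up):

```python
import math
import random

random.seed(0)
t = 1e-5  # threshold from the paper

def keep(word, freq):
    """Return True if this occurrence of `word` should be kept."""
    p_discard = max(0.0, 1.0 - math.sqrt(t / freq))
    return random.random() >= p_discard

print(keep("the", 0.05))       # very frequent word: usually discarded
print(keep("aardvark", 1e-6))  # rare word: discard probability is 0, always kept
```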

Empirical Results

To evaluate these optimizations, Mikolov et al. used an internal Google dataset containing about 1 billion words. After removing all words which occurred fewer than 5 times, the vocabulary dropped to 692K words. Two types of analogies were examined: syntactic and semantic. Syntactic analogies relate different grammatical forms of a word (e.g. “quick” : “quickly” :: “slow” : “slowly”). Semantic analogies relate pairs of words that share the same relational meaning; for example, “Berlin” : “Germany” and “Paris” : “France” form a semantic analogy.
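Analogy questions are scored with simple vector arithmetic: the model answers a : b :: c : ? with the vocabulary word nearest (by cosine similarity) to b - a + c. The embeddings below are hand-made toy vectors, not trained ones:

```python
import numpy as np

# Toy 3-dimensional "embeddings" chosen so the analogy works out.
vecs = {
    "Berlin":  np.array([1.0, 0.9, 0.1]),
    "Germany": np.array([1.0, 0.1, 0.1]),
    "Paris":   np.array([0.1, 0.9, 1.0]),
    "France":  np.array([0.1, 0.1, 1.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by nearest cosine similarity to b - a + c."""
    query = vecs[b] - vecs[a] + vecs[c]
    def cos(u, v):
        return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # The three question words are excluded as candidate answers.
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], query))

answer = analogy("Germany", "Berlin", "France")
print(answer)
```

A test case is counted as correct only if the nearest word is exactly the expected answer.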

[Figure: empirical results]

[Figure: model comparisons]


[1] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean (2013). Distributed Representations of Words and Phrases and their Compositionality. arXiv:1310.4546.

[2] McCormick, C. (2017, January 11). Word2Vec Tutorial Part 2 - Negative Sampling. Retrieved from