Distributed Representations of Words and Phrases and their Compositionality


Introduction

This paper presents several extensions of the Skip-gram model introduced by Mikolov et al. [8]. The Skip-gram model is an efficient method for learning high-quality vector representations of words from large amounts of unstructured text data. The word representations computed with this model are interesting because the learned vectors explicitly encode many linguistic regularities and patterns. Somewhat surprisingly, many of these patterns can be represented as linear translations. For example, the result of the vector calculation vec(“Madrid”) - vec(“Spain”) + vec(“France”) is closer to vec(“Paris”) than to any other word vector. The authors show that subsampling frequent words during training yields a significant speedup and improves the accuracy of the representations of less frequent words. In addition, a simplified variant of Noise Contrastive Estimation (NCE) [4] for training the Skip-gram model is presented, which results in faster training and better vector representations for frequent words, compared to the more complex hierarchical softmax used in prior work [8]. It is also shown that a non-obvious degree of language understanding can be obtained by applying basic mathematical operations to the word vector representations. For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”), and vec(“Germany”) + vec(“capital”) is close to vec(“Berlin”).
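
As a concrete illustration of the linear-translation property, the following minimal sketch shows how such word analogies can be evaluated with cosine similarity. It assumes a hypothetical embeddings dictionary mapping words to NumPy vectors (for example, loaded from pre-trained Skip-gram vectors); the function names are illustrative and not part of the paper.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(embeddings, a, b, c):
    # Return the word whose vector is closest to vec(b) - vec(a) + vec(c),
    # excluding the three query words themselves, as is standard for this evaluation.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue
        score = cosine(vec, target)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# With suitable pre-trained vectors, analogy(embeddings, "Spain", "Madrid", "France")
# would be expected to return "Paris".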

The Skip-gram Model