LightRNN: Memory and Computation-Efficient Recurrent Neural Networks


= Introduction =

The study of natural language processing dates back more than fifty years. It began in the 1950s, when natural language processing (NLP) was still embedded within the field of linguistics (Hirschberg & Manning, 2015). As computational power grew, computational linguistics evolved and gradually branched out into various NLP applications, such as text classification, speech recognition, and question answering (Brownlee, 2017). Computational linguistics, or natural language processing, is usually defined as the “subfield of computer science concerned with using computational techniques to learn, understand, and produce human language content” (Hirschberg & Manning, 2015, p. 261).

With the development of deep neural networks, one type of network, the recurrent neural network (RNN), has performed particularly well on many natural language processing tasks. The reason is that an RNN takes past inputs into account along with the current input, and gated variants such as the LSTM mitigate the vanishing and exploding gradient problems. More detail on how RNNs work in the context of NLP is given in the section on NLP using RNNs. However, one limitation of RNNs in NLP is the enormous size of the input vocabulary: it produces a very large model with too many parameters to train, making the training process both time- and memory-consuming. This is the major motivation for the paper’s authors to develop LightRNN, a new RNN technique that is particularly efficient at handling large vocabularies in NLP tasks.

= Natural Language Processing (NLP) Using Recurrent Neural Network (RNN) =

In language modelling, researchers used to represent words by arbitrary codes, such as “Id143” for “dog” (“Vector Representations of Words,” 2017). Such coding is completely arbitrary: it discards the meaning of each word and, more importantly, its connection to other words. Nowadays, a one-hot representation is commonly used, in which a word is represented by a vector with a single 1 and 0s elsewhere, whose dimension equals the size of the vocabulary. In an RNN, every word in the vocabulary is first encoded as a one-hot vector and then mapped to an embedding vector (Li, Qin, Yang, Hu, & Liu, 2016). The embedding space is “a continuous vector space where semantically similar words are mapped to nearby points” (“Vector Representations of Words,” 2017, para. 6). A popular RNN structure for NLP tasks is the long short-term memory (LSTM) network. To predict the next word, the last hidden layer of the network must compute a probability distribution over all words in the vocabulary; a softmax output layer produces these probabilities, and the word with the highest probability is selected. This method has 3 major limitations:

1. Memory Constraint

When the input vocabulary contains an enormous number of unique words, which is common in NLP tasks, the model becomes very large: both the embedding layer and the output layer hold one vector per word, so the number of trainable parameters grows linearly with the vocabulary size. This makes it difficult to fit such a model on a regular GPU device (see the parameter-count sketch after this list).

2. Computationally Heavy to Train

As mentioned above, a probability distribution over all words in the vocabulary must be computed to determine the predicted word. When the vocabulary is large, this softmax computation becomes very expensive at every time step.

3. Low Generalizability

Because applying an RNN to NLP tasks is so memory- and computation-intensive, mobile devices usually cannot run such models, which limits their practical use.
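
To make the memory and computation constraints concrete, here is a rough parameter-count sketch. The vocabulary size and dimensions below are illustrative assumptions, not numbers from the paper; the point is that both the input embedding and the softmax output layer store one vector per word, and the softmax must touch every one of those vectors at each prediction step.

<syntaxhighlight lang="python">
# Back-of-the-envelope parameter count for a word-level RNN language model.
# vocab_size and embed_dim are illustrative choices, not values from the paper.
vocab_size = 10_000_000   # |V|: unique words in a web-scale corpus
embed_dim = 1024          # embedding / hidden dimension

input_embedding = vocab_size * embed_dim   # one embedding vector per word
output_softmax = embed_dim * vocab_size    # one output vector per word (softmax layer)

total = input_embedding + output_softmax
print(f"{total:,} parameters, about {total * 4 / 1e9:.0f} GB as 32-bit floats")
# -> 20,480,000,000 parameters, about 82 GB as 32-bit floats
# The softmax also needs roughly |V| dot products per time step, which is the
# source of the heavy computation noted in limitation 2.
</syntaxhighlight>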

= LightRNN Structure =

== Part I: 2-Component Shared Embedding ==

Based on the diagram below, the following notation is used:

Let n denote the dimension of a row input vector and m the dimension of a column input vector:

row input vector [math]\displaystyle{ x^{r}_{t-1} \in \mathbb{R}^n }[/math]
column input vector [math]\displaystyle{ x^{c}_{t-1} \in \mathbb{R}^m }[/math]
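
As a concrete illustration of the 2-component shared embedding, here is a minimal sketch assuming the word-table allocation described in Li et al. (2016): the vocabulary is arranged in a table with roughly √|V| rows and columns, every word occupies one cell, all words in the same row share a row vector, and all words in the same column share a column vector. The variable names and toy sizes are illustrative, not from the paper.

<syntaxhighlight lang="python">
# Minimal sketch of the 2-Component (2C) shared embedding.
import math
import numpy as np

vocab_size = 10_000       # |V| (toy value)
embed_dim = 4             # embedding dimension (toy value)
table_size = math.ceil(math.sqrt(vocab_size))   # ~sqrt(|V|) rows and columns

# One embedding vector per ROW and one per COLUMN, instead of one per word.
row_embeddings = np.random.randn(table_size, embed_dim)   # shared x^r vectors
col_embeddings = np.random.randn(table_size, embed_dim)   # shared x^c vectors

def word_to_cell(word_id: int) -> tuple[int, int]:
    """Map a word id to its (row, column) cell in the word table."""
    return word_id // table_size, word_id % table_size

def embed(word_id: int) -> tuple[np.ndarray, np.ndarray]:
    """Return the row input vector x^r and column input vector x^c of a word."""
    r, c = word_to_cell(word_id)
    return row_embeddings[r], col_embeddings[c]

x_r, x_c = embed(143)
print(x_r.shape, x_c.shape)                                # (4,) (4,)
print(2 * table_size * embed_dim, "vs", vocab_size * embed_dim)   # 800 vs 40000
</syntaxhighlight>

Because each row vector and column vector is shared by about √|V| words, the number of embedding parameters drops from |V|·d to roughly 2·√|V|·d.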

== Part II: How 2C Shared Embedding is Used in LightRNN ==

== Part III: Bootstrap for Word Allocation ==

  1. Randomly allocate the words of the vocabulary to the cells of the word table, then train the LightRNN model (the shared embedding vectors and recurrent weights) until convergence or for a fixed number of epochs.
  2. Keeping the trained embedding vectors fixed, reallocate the words over the table cells so as to minimize the training loss; Li et al. (2016) solve this reallocation approximately with a minimum-cost maximum-flow algorithm. The two steps are repeated until the improvement (e.g., in perplexity) becomes negligible, as sketched below.
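
Below is a toy sketch of the reallocation step only. It assumes, as in Li et al. (2016), that the loss of placing a word in cell (r, c) splits into a row part and a column part, and it uses SciPy's linear_sum_assignment as a simple stand-in for the minimum-cost-flow solver used in the paper; the loss values are random placeholders rather than outputs of a trained model.

<syntaxhighlight lang="python">
# Toy sketch of reallocating words to table cells to minimize total loss.
import numpy as np
from scipy.optimize import linear_sum_assignment

num_words = 9
table_size = 3                       # a 3 x 3 table holds all 9 words
num_cells = table_size * table_size

rng = np.random.default_rng(0)
# loss_row[w, r]: loss of assigning word w to row r (placeholder values)
loss_row = rng.random((num_words, table_size))
# loss_col[w, c]: loss of assigning word w to column c (placeholder values)
loss_col = rng.random((num_words, table_size))

# Total cost of putting word w in cell (r, c) = row loss + column loss.
cost = np.zeros((num_words, num_cells))
for cell in range(num_cells):
    r, c = divmod(cell, table_size)
    cost[:, cell] = loss_row[:, r] + loss_col[:, c]

# Assign one word per cell so that the total loss is minimized.
words, cells = linear_sum_assignment(cost)
for w, cell in zip(words, cells):
    print(f"word {w} -> cell {divmod(cell, table_size)}")
</syntaxhighlight>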


= LightRNN Example =

= Remarks =