Learning Phrase Representations
Introduction
In this paper, Cho et al. propose a novel neural network model called RNN Encoder–Decoder that consists of two recurrent neural networks (RNNs). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes that representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve when the conditional probabilities of phrase pairs computed by the RNN Encoder–Decoder are used as an additional feature in the existing log-linear model.
RNN Encoder–Decoder
The proposed architecture learns to encode a variable-length sequence into a fixed-length vector representation and to decode a given fixed-length vector representation back into a variable-length sequence. From a probabilistic perspective, this model is a general method to learn the conditional distribution over a variable-length sequence conditioned on yet another variable-length sequence, e.g. [math]\displaystyle{ p(y_1, \ldots, y_{T'} | x_1, \ldots, x_T) }[/math], where one should note that the input and output sequence lengths [math]\displaystyle{ T }[/math] and [math]\displaystyle{ T' }[/math] may differ.
The encoder is an RNN that reads each symbol of an input sequence [math]\displaystyle{ \mathbf{x} }[/math] sequentially. As it reads each symbol, the hidden state of the RNN is updated according to
- [math]\displaystyle{ h_t=f(h_{t-1},x_t) }[/math]
After reading the end of the sequence (marked by an end-of-sequence symbol), the hidden state of the RNN is a summary [math]\displaystyle{ \mathbf{c} }[/math] of the whole input sequence.
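To make the recurrence concrete, the following is a minimal sketch of the encoder, not the authors' implementation: the paper's activation [math]\displaystyle{ f }[/math] is a gated hidden unit similar to a GRU, but a plain tanh cell is used here, and the weight names, dimensions, and random inputs are illustrative assumptions.
<pre>
import numpy as np

# Toy encoder: reads the source symbols one by one and returns the final
# hidden state as the fixed-length summary c. A plain tanh cell stands in
# for the paper's gated hidden unit; names and sizes are illustrative.
hidden_size, embed_size = 4, 3
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_x = rng.normal(scale=0.1, size=(hidden_size, embed_size))

def encode(x_seq):
    """x_seq: array of shape (T, embed_size), one embedded source symbol per row."""
    h = np.zeros(hidden_size)
    for x_t in x_seq:
        h = np.tanh(W_h @ h + W_x @ x_t)   # h_t = f(h_{t-1}, x_t)
    return h                               # summary c of the whole input sequence

c = encode(rng.normal(size=(5, embed_size)))   # e.g. a five-symbol source sentence
</pre>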
The decoder of the proposed model is another RNN which is trained to generate the output sequence by predicting the next symbol [math]\displaystyle{ y_t }[/math] given the hidden state [math]\displaystyle{ h_t }[/math]. However, as shown in Figure 1, both [math]\displaystyle{ y_t }[/math] and [math]\displaystyle{ h_t }[/math] are also conditioned on [math]\displaystyle{ y_{t-1} }[/math] and on the summary [math]\displaystyle{ \mathbf{c} }[/math] of the input sequence. Hence, the hidden state of the decoder at time [math]\displaystyle{ t }[/math] is computed by
- [math]\displaystyle{ h_t=f(h_{t-1},y_{t-1},\mathbf{c}) }[/math]
and similarly, the conditional distribution of the next symbol is
- [math]\displaystyle{ P(y_t|y_{t-1},y_{t-2},\cdots,y_1,\mathbf{c})=g(h_t,y_{t-1},\mathbf{c}) }[/math]
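A single decoder step can be sketched in the same toy style, again with a plain tanh cell in place of the paper's gated unit and with [math]\displaystyle{ g }[/math] taken to be a softmax over the target vocabulary computed from the hidden state; the parametrization below is an illustrative assumption, not the one used in the paper.
<pre>
import numpy as np

# Toy decoder step: h_t = f(h_{t-1}, y_{t-1}, c), then a softmax over the
# target vocabulary as g. In this sketch g looks only at h_t.
hidden_size, embed_size, vocab_size = 4, 3, 6
rng = np.random.default_rng(1)
U_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
U_y = rng.normal(scale=0.1, size=(hidden_size, embed_size))
U_c = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(h_prev, y_prev, c):
    """h_prev: previous decoder state; y_prev: embedding of the previous
    target symbol; c: fixed-length summary of the source sequence."""
    h = np.tanh(U_h @ h_prev + U_y @ y_prev + U_c @ c)   # h_t = f(h_{t-1}, y_{t-1}, c)
    p = softmax(V @ h)                                   # P(y_t | y_{<t}, c)
    return h, p
</pre>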
The two components of the proposed RNN Encoder–Decoder are jointly trained to maximize the conditional log-likelihood
- [math]\displaystyle{ \max_{\mathbf{\theta}}\frac{1}{N}\sum_{n=1}^{N}\log p_\mathbf{\theta}(y_n|x_n) }[/math]
where [math]\displaystyle{ \mathbf{\theta} }[/math] is the set of model parameters and each [math]\displaystyle{ (x_n, y_n) }[/math] is an (input sequence, output sequence) pair from the training set.
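The criterion can be made concrete with the two toy sketches above (so this assumes encode and decode_step have already been defined): the per-pair log-likelihood [math]\displaystyle{ \log p_\mathbf{\theta}(y_n|x_n) }[/math] factorizes over the target symbols by the chain rule, and the objective is its average over the [math]\displaystyle{ N }[/math] training pairs. The gradient computation and the actual maximization used in the paper are omitted here.
<pre>
import numpy as np

# Per-pair conditional log-likelihood log p(y | x), accumulated symbol by
# symbol with teacher forcing. Builds on the toy encode / decode_step
# sketches above; gradients and the optimizer are omitted.
def sequence_log_prob(x_seq, y_ids, y_embed):
    """x_seq: embedded source symbols; y_ids: target symbol indices;
    y_embed: embeddings of the target symbols, aligned with y_ids."""
    c = encode(x_seq)
    h = np.zeros_like(c)                 # assumed zero initial decoder state
    y_prev = np.zeros(y_embed.shape[1])  # no previous symbol at the first step
    logp = 0.0
    for t, y_id in enumerate(y_ids):
        h, p = decode_step(h, y_prev, c)
        logp += np.log(p[y_id])          # log P(y_t | y_{<t}, c)
        y_prev = y_embed[t]              # feed the reference symbol back in
    return logp

# Objective: the average over the N training pairs, maximized w.r.t. theta, e.g.
# np.mean([sequence_log_prob(x_n, ids_n, emb_n) for (x_n, ids_n, emb_n) in data])
</pre>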