Learning Phrase Representations
Revision as of 20:16, 16 November 2015

Introduction

In this paper, Cho et al. propose a novel neural network model called RNN Encoder–Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder–Decoder as an additional feature in the existing log-linear model.

RNN Encoder–Decoder

In this paper, researchers propose a novel neural network architecture that learns to encode a variable-length sequence into a fixed-length vector representation and to decode a given fixed-length vector representation back into a variable-length sequence. From a probabilistic perspective, this new model is a general method to learn the conditional distribution over a variable-length sequence conditioned on yet another variable-length sequence, e.g. [math]p(y_1, \ldots, y_{T'} \mid x_1, \ldots, x_T)[/math], where one should note that the input and output sequence lengths [math]T[/math] and [math]T'[/math] may differ.

File:encdec1.png
Fig 1. An illustration of the proposed RNN Encoder–Decoder.

The encoder is an RNN that reads each symbol of an input sequence x sequentially. As it reads each symbol, the hidden state of the RNN changes.

[math] h_t=f(h_{t-1},x_t) [/math]
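The encoder recurrence above can be sketched as a simple loop over the input symbols. This is a minimal illustration using a vanilla tanh RNN cell with random placeholder weights (`W_x`, `W_h`, and the dimensions are assumptions for illustration); the paper's actual hidden unit is a gated one, not this plain cell.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input weights (placeholder)
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights (placeholder)

def encode(xs):
    """Read a variable-length sequence; the final hidden state is the summary c."""
    h = np.zeros(hidden_dim)
    for x_t in xs:                       # h_t = f(h_{t-1}, x_t)
        h = np.tanh(W_x @ x_t + W_h @ h)
    return h

sequence = [rng.normal(size=input_dim) for _ in range(5)]   # a sequence of length T = 5
c = encode(sequence)                                        # fixed-length summary vector
```

Note that `c` has the same fixed dimension regardless of the sequence length `T`, which is the point of the encoder.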


After reading the end of the sequence (marked by an end-of-sequence symbol), the hidden state of the RNN is a summary c of the whole input sequence. The hidden state of the decoder at time t is computed by

[math] h_t=f(h_{t-1},y_{t-1},\mathbf{c}) [/math]

and similarly, the conditional distribution of the next symbol is

[math] P(y_t|y_{t-1},y_{t-2},\cdots,y_1,\mathbf{c})=g(h_t,y_{t-1},\mathbf{c})[/math]
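A single decoder step combining these two equations can be sketched as follows. This is a hedged illustration, assuming a vanilla tanh cell for [math]f[/math] and a softmax readout for [math]g[/math]; all weight matrices are random placeholders, and the paper's actual hidden unit is gated.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_dim, vocab_size = 8, 10

W_h   = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights (placeholder)
W_y   = rng.normal(scale=0.1, size=(hidden_dim, vocab_size))  # previous-symbol weights (placeholder)
W_c   = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # summary-vector weights (placeholder)
W_out = rng.normal(scale=0.1, size=(vocab_size, hidden_dim))  # output projection (placeholder)

def decoder_step(h_prev, y_prev, c):
    # h_t = f(h_{t-1}, y_{t-1}, c)
    h = np.tanh(W_h @ h_prev + W_y @ y_prev + W_c @ c)
    # P(y_t | y_{t-1}, ..., y_1, c) = g(h_t, y_{t-1}, c): softmax over the vocabulary
    logits = W_out @ h
    probs = np.exp(logits - logits.max())
    return h, probs / probs.sum()

# One step: zero initial state, a one-hot previous symbol, and a summary vector c.
h0, y0, c = np.zeros(hidden_dim), np.eye(vocab_size)[0], rng.normal(size=hidden_dim)
h1, p1 = decoder_step(h0, y0, c)
```

Because the summary [math]\mathbf{c}[/math] is fed into every step, the decoder conditions each output symbol on the entire input sequence, not just on the previous outputs.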