stat946f15/Sequence to sequence learning with neural networks

Revision as of 17:50, 16 October 2015

Introduction

The emergence of the Internet and other modern technology has greatly increased people's ability to communicate across vast distances and barriers. However, the fundamental barrier of language remains, and, as anyone who has attempted to learn a new language can attest, it takes a tremendous amount of work to learn more than one language past childhood. The ability to translate between languages efficiently and quickly would therefore be of great importance. This is an extremely difficult problem, however, as languages can have varying grammar, and context always plays an important role. For example, the word "back" means entirely different things in the following two sentences:

I am in the back of the car.

My back hurts.

Deep neural networks have proven to be very capable of solving other difficult problems, such as reproducing sound waves from videos (need source), and a sufficiently complex neural network might provide an excellent solution in this case as well. The purpose of the paper is to apply multi-layer long short-term memory neural networks to the machine translation problem and to assess the translation accuracy of this approach.

Model

Long Short-Term Memory Recurrent Neural Network (LSTM)

Recurrent neural networks are a variation of deep neural networks that are capable of storing information about previous hidden states in special memory layers. Unlike feed-forward neural networks, which take a single fixed-length vector as input and produce a fixed-length vector as output, recurrent neural networks can take in a sequence of fixed-length vectors as input because of their ability to store information and maintain a connection between inputs through this memory layer. By comparison, previous inputs have no impact on the current output of a feed-forward neural network, whereas they can impact the current output of a recurrent neural network.
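As a rough illustration of this memory layer, a vanilla RNN cell can be sketched in a few lines. This is not the paper's LSTM; the weight shapes and sequence length below are arbitrary choices for the example. The key point is that the hidden state h carries information forward, so the output at each step depends on the whole input prefix, and the same cell handles any sequence length.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(4, 3))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden (the "memory")
W_hy = rng.normal(scale=0.1, size=(2, 4))  # hidden -> output

def rnn_forward(xs):
    """Run the RNN over a variable-length sequence of fixed-length vectors."""
    h = np.zeros(4)                  # hidden state, carried across time steps
    ys = []
    for x in xs:                     # any sequence length works
        h = np.tanh(W_xh @ x + W_hh @ h)
        ys.append(W_hy @ h)
    return ys

seq = [rng.normal(size=3) for _ in range(5)]  # a 5-step input sequence
outputs = rnn_forward(seq)
print(len(outputs), outputs[0].shape)  # one output vector per time step
```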


This form of input fits naturally with language translation, since sentences are sequences of words, and many problems with representing variable-length sentences as fixed-length vectors can be avoided. However, training recurrent neural networks to learn long time lag dependencies, where inputs many time steps back can heavily influence the current output, is difficult and generally results in exploding or vanishing gradients. A variation of recurrent neural networks, the long short-term memory (LSTM) network, was used in this paper instead, as it does not suffer as much from the vanishing gradient problem.
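The vanishing-gradient effect can be sketched numerically: in backpropagation through time, the gradient is multiplied by the recurrent Jacobian once per time step, so when the recurrent weights have spectral radius below one the gradient decays geometrically with distance. The matrix and step count below are illustrative, not from the paper.

```python
import numpy as np

W_hh = 0.5 * np.eye(4)          # recurrent weights with spectral radius 0.5
grad = np.ones(4)               # gradient arriving at the last time step

for t in range(50):             # push the gradient back 50 time steps
    grad = W_hh.T @ grad        # one step of backpropagation through time

# Norm shrinks by a factor of 0.5 per step: 2 * 0.5**50, roughly 1.8e-15,
# so inputs 50 steps back contribute almost nothing to the weight update.
print(np.linalg.norm(grad))
```

With a spectral radius above one, the same loop blows up instead, which is the exploding-gradient side of the problem.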


The purpose of the LSTM in this case is to estimate the conditional probability of the output sequence, [math]\displaystyle{ \,(y_1,...,y_{T'}) }[/math], given the input sequence, [math]\displaystyle{ \,(x_1,...,x_{T}) }[/math], where [math]\displaystyle{ \,T }[/math] does not have to equal [math]\displaystyle{ \,T' }[/math]:

[math]\displaystyle{ \,p(y_1,...,y_{T'}|x_1,...,x_{T}) }[/math]
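The standard way to make this estimate tractable is to factor the conditional probability with the chain rule, one output word at a time, with each step conditioned on the input encoding and the words generated so far. The sketch below shows only the bookkeeping of that factorization; the per-step softmax distributions and the 4-word vocabulary are made up purely for illustration.

```python
import numpy as np

def sequence_log_prob(step_probs, target_ids):
    """Chain rule: log p(y_1..y_T') = sum_t log p(y_t | prefix).

    step_probs[t] is the decoder's distribution over the vocabulary at
    step t (already conditioned on the input and on y_1..y_{t-1});
    target_ids[t] is the index of the actual word y_t.
    """
    return sum(np.log(p[y]) for p, y in zip(step_probs, target_ids))

# Three decoding steps over a toy 4-word vocabulary:
step_probs = [np.array([0.7, 0.1, 0.1, 0.1]),
              np.array([0.2, 0.6, 0.1, 0.1]),
              np.array([0.1, 0.1, 0.1, 0.7])]
target = [0, 1, 3]

log_p = sequence_log_prob(step_probs, target)
print(np.exp(log_p))  # 0.7 * 0.6 * 0.7 = 0.294
```

Working in log space avoids underflow, since the product of many per-word probabilities quickly becomes tiny for long sentences.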

Input and Output Data Transformation

Training Score

Training and Results

Training Method

Results