stat946w18/Tensorized LSTMs


Presented by

Chen, Weishi (Edward)

Introduction

Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer-term temporal information. The capacity of an LSTM network can be increased by widening it and by adding layers (illustrations will be provided later).


However, widening the network usually introduces additional parameters, while adding layers increases the time required for model training and evaluation. As an alternative, the paper "Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning" proposes an LSTM-based model called the Tensorized LSTM, in which the hidden states are represented by tensors and updated via a cross-layer convolution.

  • By increasing the tensor size, the network can be widened efficiently without additional parameters, since the parameters are shared across different locations in the tensor (a rough parameter-count sketch follows this list).
  • By delaying the output, the network can be deepened implicitly with little additional runtime, since deep computations for each time-step are merged into the temporal computations of the sequence.
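
The rough NumPy sketch below illustrates the first point: a fully-connected hidden-state update grows roughly quadratically with the layer width, whereas a convolution kernel shared across the tensor locations keeps the parameter count fixed. The tensor shape P × M, the kernel size K, and all concrete dimensions here are assumptions made for illustration, not values from the paper.

<pre>
import numpy as np

# Rough parameter-count comparison (an illustrative sketch only: the tensor
# shape P x M, the kernel size K, and all dimensions below are assumptions,
# not values taken from the paper).
R, M = 32, 64      # input size and hidden size of a standard layer
P, K = 4, 3        # P: number of tensor locations (width), K: kernel size

# Fully-connected update: widening the hidden state from M to P*M makes the
# weight matrix grow roughly quadratically with the width.
fc_params_narrow = (R + M) * M
fc_params_wide   = (R + P * M) * (P * M)

# Convolutional (tensorized) update: one K x M x M kernel is shared across
# all P locations, so increasing P adds no new parameters.
conv_params_shared = K * M * M

print(fc_params_narrow)    # 6144
print(fc_params_wide)      # 73728
print(conv_params_shared)  # 12288
</pre>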


The paper also presents experiments on five challenging sequence-learning tasks that show the potential of the proposed model.

A Quick Introduction to RNN and LSTM

We consider the time-series prediction task of producing a desired output <math>y_t</math> at each time-step <math>t \in \{1, \ldots, T\}</math> given an observed input sequence <math>x_{1:t} = \{x_1, x_2, \cdots, x_t\}</math>, where <math>x_t \in \mathbb{R}^R</math> and <math>y_t \in \mathbb{R}^S</math> are vectors. An RNN learns how to use a hidden state vector <math>h_t \in \mathbb{R}^M</math> to encapsulate the relevant features of the entire input history <math>x_{1:t}</math> up to time-step <math>t</math> (i.e. all inputs from the initial time-step through the current one, before the prediction is made; an illustration is given below).

\begin{align} h_{t-1}^{cat} = [x_t, h_{t-1}] \hspace{2cm} (1) \end{align}

where <math>h_{t-1}^{cat} \in \mathbb{R}^{R+M}</math> is the concatenation of the current input <math>x_t</math> and the previous hidden state <math>h_{t-1}</math>, which expands the dimensionality of the intermediate information.

The update of the hidden state <math>h_t</math> is defined as:

\begin{align} a_{t} =h_{t-1}^{cat} W^h + b^h \hspace{2cm} (2) \end{align}

and

\begin{align} h_t = \phi(a_t) \hspace{2cm} (3) \end{align}
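
As a concrete illustration, here is a minimal NumPy sketch of the recurrence in equations (1)-(3); the choice of tanh for <math>\phi</math> and the toy dimensions are assumptions made for the example, not values given in the text.

<pre>
import numpy as np

# Minimal sketch of the recurrence in equations (1)-(3). The choice of tanh
# for the activation phi and the toy dimensions are assumptions.
R, M = 4, 8                                   # input size R, hidden size M

rng = np.random.default_rng(0)
W_h = 0.1 * rng.standard_normal((R + M, M))   # W^h in equation (2)
b_h = np.zeros(M)                             # b^h in equation (2)

def rnn_step(x_t, h_prev):
    h_cat = np.concatenate([x_t, h_prev])     # (1): h_{t-1}^{cat} = [x_t, h_{t-1}]
    a_t = h_cat @ W_h + b_h                   # (2): affine transformation
    return np.tanh(a_t)                       # (3): h_t = phi(a_t)

# Run the recurrence over a toy input sequence x_{1:T}.
T = 5
h_t = np.zeros(M)
for t in range(T):
    x_t = rng.standard_normal(R)
    h_t = rnn_step(x_t, h_t)
print(h_t.shape)                              # (M,) = (8,)
</pre>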