# Presented by

Chen, Weishi (Edward)

# Introduction

Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer-term temporal information. The capacity of an LSTM network can be increased by widening it and by adding layers (illustrations will be provided later).

However, widening and deepening an LSTM usually introduces additional parameters, and additional or wider layers increase the time required for model training and evaluation. As an alternative, the paper "Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning" proposes a model based on the LSTM, called the Tensorized LSTM, in which the hidden states are represented by tensors and updated via a cross-layer convolution.

• By increasing the tensor size, the network can be widened efficiently without additional parameters, since the parameters are shared across different locations in the tensor.
• By delaying the output, the network can be deepened implicitly with little additional runtime since deep computations for each time step are merged into temporal computations of the sequence.

Also, the paper presents experiments on five challenging sequence learning tasks that show the potential of the proposed model.

# A Quick Introduction to RNN and LSTM

We consider the time-series prediction task of producing a desired output $y_t$ at each time-step $t ∈ \{1, ..., T\}$ given an observed input sequence $x_{1:t} = \{x_1, x_2, \cdots, x_t\}$, where $x_t∈R^R$ and $y_t∈R^S$ are vectors. An RNN learns how to use a hidden state vector $h_t ∈ R^M$ to encapsulate the relevant features of the entire input history $x_{1:t}$ (all inputs from the initial time-step up to the current time-step t; an illustration is given below).

\begin{align} h_{t-1}^{cat} = [x_t, h_{t-1}] \hspace{2cm} (1) \end{align}

where $h_{t-1}^{cat} ∈R^{R+M}$ is the concatenation of the current input $x_t$ and the previous hidden state $h_{t−1}$, which expands the dimensionality of intermediate information.

The update of the hidden state $h_t$ is defined as:

\begin{align} a_{t} =h_{t-1}^{cat} W^h + b^h \hspace{2cm} (2) \end{align}

and

\begin{align} h_t = \Phi(a_t) \hspace{2cm} (3) \end{align}

where $W^h∈R^{(R+M)×M}$ and $b^h∈R^M$ guarantee that the hidden state produced at each step is of dimension M, $a_t ∈R^M$ is the hidden activation, and $\Phi(·)$ is the element-wise "tanh" function. Finally, the output $y_t$ at time-step t is generated by:

\begin{align} y_t = \varphi(h_{t} W^y + b^y) \hspace{2cm} (4) \end{align}

where $W^y∈R^{M×S}$ and $b^y∈R^S$, and $\varphi(·)$ can be any differentiable function. Note that $\Phi$ is the element-wise function that introduces non-linearity and produces the next hidden state, while $\varphi$ is applied to generate the output.
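Equations (1)-(4) can be sketched directly in numpy. This is a minimal illustration with hypothetical, randomly initialized parameters; $\varphi$ is taken to be the identity for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
R, M, S = 4, 8, 3  # input, hidden, and output sizes (hypothetical)

# Randomly initialized parameters; shapes follow the text
Wh = rng.standard_normal((R + M, M)) * 0.1   # W^h in R^{(R+M)xM}
bh = np.zeros(M)                             # b^h in R^M
Wy = rng.standard_normal((M, S)) * 0.1       # W^y in R^{MxS}
by = np.zeros(S)                             # b^y in R^S

def rnn_step(x_t, h_prev):
    h_cat = np.concatenate([x_t, h_prev])    # Eq. (1): concatenate input and previous state
    a_t = h_cat @ Wh + bh                    # Eq. (2): hidden activation
    h_t = np.tanh(a_t)                       # Eq. (3): element-wise tanh
    y_t = h_t @ Wy + by                      # Eq. (4), taking varphi as the identity
    return h_t, y_t

# Run a few steps over a random input sequence
h = np.zeros(M)
for t in range(5):
    h, y = rnn_step(rng.standard_normal(R), h)
```

Each step consumes one input vector and produces one hidden state and one output, so the hidden state is the only information carried forward in time.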

However, one shortfall of RNNs is vanishing/exploding gradients. This shortfall is especially significant when modelling long-range dependencies. One alternative is the LSTM (Long Short-Term Memory): LSTMs alleviate these problems by employing memory cells to preserve information for longer, and by adopting gating mechanisms to modulate the information flow. Since the LSTM is successful in sequence modelling, it is natural to consider how to increase the complexity of the model to accommodate more complex analytical needs.

# Structural Measurement of Sequential Model

We can consider the capacity of a network as consisting of two components: the width (the amount of information handled in parallel) and the depth (the number of computation steps).

A way to widen the LSTM is to increase the number of units in a hidden layer; however, the parameter number scales quadratically with the number of units. To deepen the LSTM, the popular Stacked LSTM (sLSTM) stacks multiple LSTM layers. The drawback of the sLSTM, however, is that runtime is proportional to the number of layers, and information from the input is potentially lost (due to gradient vanishing/explosion) as it propagates vertically through the layers. The paper introduces a way to both widen and deepen the LSTM whilst keeping the parameter number and runtime largely unchanged. In summary, it makes the following contributions:
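The quadratic scaling is easy to verify with a quick count. The sketch below uses the standard LSTM parameter count (four gates, each with a weight matrix over the concatenated input and hidden state plus a bias); the specific sizes are illustrative.

```python
def lstm_params(R, M):
    """Parameter count of one standard LSTM layer: input size R, M hidden units."""
    # Four gates, each with a (R+M) x M weight matrix and an M-dim bias
    return 4 * ((R + M) * M + M)

p_256 = lstm_params(100, 256)   # 365,568 parameters
p_512 = lstm_params(100, 512)   # 1,255,424 parameters
# Doubling the hidden size more than triples the parameter count,
# approaching 4x as M dominates R.
```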

(a) Tensorizing RNN hidden state vectors into higher-dimensional tensors enables more flexible parameter sharing, so the network can be widened more efficiently without additional parameters.

(b) Based on (a), merging RNN deep computations into its temporal computations lets the network be deepened with little additional runtime, resulting in the Tensorized RNN (tRNN).

(c) The tRNN is extended to an LSTM, namely the Tensorized LSTM (tLSTM), which integrates a novel memory cell convolution to help prevent vanishing/exploding gradients.

# Method

This section goes through the methodology.

## Part 1: Tensorizing RNN Hidden State Vectors

Definition: Tensorization is defined as the transformation or mapping of lower-order data to higher-order data. For example, the low-order data can be a vector, and the tensorized result is a matrix, a third-order tensor or a higher-order tensor. The ‘low-order’ data can also be a matrix or a third-order tensor, for example. In the latter case, tensorization can take place along one or multiple modes.

Optimization Methodology Part 1: In an RNN, the parameter number scales quadratically with the size of the hidden state. A popular way to limit the parameter number when widening the network is to organize the parameters as higher-dimensional tensors that can be factorized into lower-rank sub-tensors containing significantly fewer elements; this is known as tensor factorization.

Optimization Methodology Part 2: Another common way to reduce the parameter number is to share a small set of parameters across different locations in the hidden state, similar to Convolutional Neural Networks (CNNs).

Effects: This widens the network since the hidden state vectors are in fact broadcast to interact with the tensorized parameters.

We adopt parameter sharing to cut down the parameter number for RNNs, since compared with factorization, it has the following advantages:

(i) Scalability: the number of shared parameters can be set independently of the hidden state size

(ii) Separability: the information flow can be carefully managed by controlling the receptive field, allowing one to shift RNN deep computations to the temporal domain

We also explicitly tensorize the RNN hidden state vectors since, compared with vectors, tensors offer better:

(i) Flexibility: one can specify which dimensions share parameters and then increase the size of those dimensions without introducing additional parameters

(ii) Efficiency: with higher-dimensional tensors, the network can be widened faster w.r.t. its depth for a fixed parameter number (explained later).

Illustration: For ease of exposition, we first consider 2D tensors (matrices): we tensorize the hidden state $h_t∈R^{M}$ to become $H_t∈R^{P×M}$, where P is the tensor size and M the channel size. We locally-connect the first dimension of $H_t$ (of size P, the tensor size) in order to share parameters, and fully-connect the second dimension of $H_t$ (of size M, the channel size) to allow global interactions. This is analogous to a CNN, which fully-connects one dimension (e.g., the RGB channels of input images) to globally fuse different feature planes. If one compares $H_t$ to the hidden state of a Stacked RNN (sRNN) (see figure below), then P is akin to the number of stacked hidden layers (the vertical length in the graph), and M to the size of each hidden layer (each white node in the graph). We start by describing our model based on 2D tensors, and finally show how to strengthen the model with higher-dimensional tensors.
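The key point of the tensorized state is that the shared parameters depend only on the kernel size K and channel size M, not on the tensor size P. A tiny shape-level sketch (with hypothetical sizes):

```python
import numpy as np

K, M = 3, 16                  # kernel size and channel size (hypothetical values)

# Cross-layer convolution kernel: its size depends only on K and M, not on P
Wh = np.zeros((K, M, M))
bh = np.zeros(M)
shared_params = Wh.size + bh.size      # K*M*M + M = 784

# Widening the network means growing the first dimension of H_t ...
H_narrow = np.zeros((4, M))            # tensor size P = 4
H_wide = np.zeros((32, M))             # tensor size P = 32
# ... while the shared kernel, and hence the parameter count, is unchanged.
```

Contrast this with a fully-connected hidden transition, whose weight matrix would grow as $(PM)^2$ when P increases.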

## Part 2: Merging Deep Computations

Since an RNN is already deep in its temporal direction, we can deepen an input-to-output computation by associating the input $x_t$ with a (delayed) future output. In doing this, we need to ensure that the output $y_t$ is separable, i.e., not influenced by any future input $x_{t^{'}}$ $(t^{'}\gt t)$. Thus, we concatenate the projection of $x_t$ to the top of the previous hidden state $H_{t−1}$, then gradually shift the input information down when the temporal computation proceeds, and finally generate $y_t$ from the bottom of $H_{t+L−1}$, where L−1 is the number of delayed time-steps for computations of depth L.

An example with L = 3 is shown in the figure below.

This is in fact a skewed sRNN (or a tRNN without feedback). However, the method does not require changing the network structure, and it also allows other kinds of interactions as long as the output remains separable; for example, one can increase the local connections and use feedback (shown in the figure below), which can be beneficial for sRNNs (or the tRNN).

In order to share parameters, we update $H_t$ using a convolution with a learnable kernel. In this manner we increase the complexity of the input-to-output mapping (by delaying outputs) and limit parameter growth (by sharing transition parameters using convolutions).

To examine the resulting model mathematically, let $H^{cat}_{t−1}∈R^{(P+1)×M}$ be the concatenated hidden state, and $p∈Z_+$ a location in the tensor. The channel vector $h^{cat}_{t−1, p }∈R^M$ at location p of $H^{cat}_{t−1}$ (the p-th channel of $H^{cat}_{t-1}$) is defined as:

\begin{align} h^{cat}_{t-1, p} = x_t W^x + b^x \hspace{1cm} \text{if } p = 1 \hspace{1cm} (5) \end{align}

\begin{align} h^{cat}_{t-1, p} = h_{t-1, p-1} \hspace{1cm} \text{if } p > 1 \hspace{1cm} (6) \end{align}

where $W^x ∈ R^{R×M}$ and $b^x ∈ R^M$ (recall the dimension of input x is R). Then, the update of tensor $H_t$ is implemented via a convolution:

\begin{align} A_t = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (7) \end{align}

\begin{align} H_t = \Phi(A_t) \hspace{2cm} (8) \end{align}

where $W^h∈R^{K×M^i×M^o}$ is the kernel weight of size K, with $M^i =M$ input channels and $M^o =M$ output channels, $b^h ∈ R^{M^o}$ is the kernel bias, $A_t ∈ R^{P×M^o}$ is the hidden activation, and $\circledast$ is the convolution operator. Since the kernel convolves across different hidden layers, we call it the cross-layer convolution. The kernel enables interaction, both bottom-up and top-down across layers. Finally, we generate $y_t$ from the channel vector $h_{t+L−1,P}∈R^M$ which is located at the bottom of $H_{t+L−1}$:

\begin{align} y_t = \varphi(h_{t+L-1, P} W^y + b^y) \hspace{2cm} (9) \end{align}

where $W^y ∈R^{M×S}$ and $b^y ∈R^S$. Generating the output from the bottom channel guarantees that the receptive field of $y_t$ only covers the current and previous inputs $x_{1:t}$ (check the skewed sRNN figure below).
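Equations (5)-(9) can be sketched as a single tRNN step in numpy. This is a minimal illustration under stated assumptions: hypothetical sizes, random parameters, and kernel size K = 2 so that a valid convolution maps the (P+1)-row concatenated state to P rows.

```python
import numpy as np

rng = np.random.default_rng(1)
R, S, P, M, K = 4, 3, 5, 8, 2   # K = 2: valid convolution maps (P+1, M) -> (P, M)

Wx = rng.standard_normal((R, M)) * 0.1     # input projection, Eq. (5)
bx = np.zeros(M)
Wh = rng.standard_normal((K, M, M)) * 0.1  # cross-layer kernel, Eq. (7)
bh = np.zeros(M)
Wy = rng.standard_normal((M, S)) * 0.1     # output projection, Eq. (9)
by = np.zeros(S)

def trnn_step(x_t, H_prev):
    # Eqs. (5)-(6): project x_t and stack it on top of the previous hidden tensor
    Hcat = np.vstack([(x_t @ Wx + bx)[None, :], H_prev])   # (P+1, M)
    # Eq. (7): convolve the shared kernel across the layer dimension
    A = np.stack([np.einsum('ki,kio->o', Hcat[p:p + K], Wh) + bh
                  for p in range(P)])
    return np.tanh(A)                                      # Eq. (8)

H = np.zeros((P, M))
for t in range(6):
    H = trnn_step(rng.standard_normal(R), H)
y = H[-1] @ Wy + by   # Eq. (9): read the (delayed) output from the bottom channel
```

Because the input enters at the top row and the output is read from the bottom row, information takes P steps to traverse the tensor, which is exactly the implicit depth gained by delaying the output.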

### Quick Summary of Set of Parameters

1. $W^x$ and $b^x$ connect the input to the first hidden node

2. $W^h$ and $b^h$ convolve between layers

3. $W^y$ and $b^y$ produce the output at each stage

## Part 3: Extending to LSTMs

As with the standard RNN, to allow the tRNN (skewed sRNN) to capture long-range temporal dependencies, one can straightforwardly extend it to a tLSTM by replacing the tRNN update equations:

\begin{align} [A^g_t, A^i_t, A^f_t, A^o_t] = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (10) \end{align}

\begin{align} [G_t, I_t, F_t, O_t]= [\Phi{(A^g_t)}, σ(A^i_t), σ(A^f_t), σ(A^o_t)] \hspace{2cm} (11) \end{align}

These are quite similar to the tRNN case; the main differences can be observed in the memory cells of the tLSTM ($C_t$):

\begin{align} C_t= G_t \odot I_t + C_{t-1} \odot F_t \hspace{2cm} (12) \end{align}

\begin{align} H_t= \Phi{(C_t )} \odot O_t \hspace{2cm} (13) \end{align}

Summary of the terms:

1. $G_t$: Activation of new content

2. $I_t$: Input gate

3. $F_t$: Forget gate

4. $O_t$: Output gate
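Equations (10)-(13) can be sketched as a single tLSTM step. The sketch below is a minimal illustration with hypothetical sizes and random parameters; the projected input row that would come from Eq. (5) is stood in by a random vector.

```python
import numpy as np

rng = np.random.default_rng(2)
P, M, K = 5, 8, 2     # hypothetical tensor size, channel size, kernel size

# One cross-layer kernel produces all four gate activations: 4*M output channels
Wh = rng.standard_normal((K, M, 4 * M)) * 0.1
bh = np.zeros(4 * M)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def tlstm_step(Hcat, C_prev):
    # Eq. (10): valid cross-layer convolution, (P+1, M) -> (P, 4M)
    A = np.stack([np.einsum('ki,kio->o', Hcat[p:p + K], Wh) + bh
                  for p in range(P)])
    Ag, Ai, Af, Ao = np.split(A, 4, axis=1)
    # Eq. (11): new content plus input, forget, and output gates
    G, I, F, O = np.tanh(Ag), sigmoid(Ai), sigmoid(Af), sigmoid(Ao)
    C = G * I + C_prev * F        # Eq. (12): gated memory cell update
    H = np.tanh(C) * O            # Eq. (13): gated hidden state
    return H, C

H, C = np.zeros((P, M)), np.zeros((P, M))
for t in range(4):
    top = rng.standard_normal(M) * 0.1   # stands in for the projected input x_t W^x + b^x
    H, C = tlstm_step(np.vstack([top[None, :], H]), C)
```

Note that a single convolution emits all four gate activations at once, which is why the kernel has $4M$ output channels rather than $M$ as in the tRNN.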

See the graph below for an illustration:

To further evolve the tLSTM so that it captures long-range dependencies from multiple directions, we additionally introduce a novel memory cell convolution, by which the memory cells can have a larger receptive field (figure provided below).

One can also dynamically generate this convolution kernel so that it is both time- and location-dependent, allowing flexible control over long-range dependencies from different directions. Mathematically, it is represented by the following formulas:

\begin{align} [A^g_t, A^i_t, A^f_t, A^o_t, A^q_t] = H^{cat}_{t-1} \circledast \{W^h, b^h \} \hspace{2cm} (14) \end{align}

\begin{align} [G_t, I_t, F_t, O_t, Q_t]= [\Phi{(A^g_t)}, σ(A^i_t), σ(A^f_t), σ(A^o_t), ς(A^q_t)] \hspace{2cm} (15) \end{align}

\begin{align} W_t^c(p) = reshape(q_{t,p}, [K, 1, 1]) \hspace{2cm} (16) \end{align}

\begin{align} C_{t-1}^{conv}= C_{t-1} \circledast W_t^c(p) \hspace{2cm} (17) \end{align}

\begin{align} C_t= G_t \odot I_t + C_{t-1}^{conv} \odot F_t \hspace{2cm} (18) \end{align}

\begin{align} H_t= \Phi{(C_t )} \odot O_t \hspace{2cm} (19) \end{align}

where the kernel $\{W^h, b^h\}$ has additional $\langle K \rangle$ output channels to generate the activation $A^q_t ∈ R^{P×\langle K \rangle}$ for the dynamic kernel bank $Q_t∈R^{P×\langle K \rangle}$, $q_{t,p}∈R^{\langle K \rangle}$ is the vectorized adaptive kernel at location p of $Q_t$, and $W^c_t(p) ∈ R^{K×1×1}$ is the dynamic kernel of size K with a single input/output channel, reshaped from $q_{t,p}$. Note that the paper also employs a softmax function ς(·) to normalize the channel dimension of $Q_t$, which stabilizes the values of the memory cells and helps to prevent vanishing/exploding gradients. An illustration of the process is provided below:
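The memory cell convolution of Eqs. (15)-(17) can be sketched as follows. This is a minimal illustration under stated assumptions: hypothetical sizes, a random activation standing in for $A^q_t$, and zero padding along the layer dimension so every location sees a K-tap window.

```python
import numpy as np

rng = np.random.default_rng(3)
P, M, K = 5, 8, 3             # K here is the dynamic kernel size <K> (hypothetical)

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def memory_cell_conv(C_prev, Aq):
    # Eq. (15): softmax over the kernel dimension normalizes each adaptive kernel
    Q = softmax(Aq)                              # dynamic kernel bank, (P, K)
    # Zero-pad along the layer dimension so each location sees K neighbours
    pad = K // 2
    Cp = np.vstack([np.zeros((pad, M)), C_prev, np.zeros((pad, M))])
    Cconv = np.empty_like(C_prev)
    for p in range(P):
        # Eqs. (16)-(17): a time- and location-dependent K-tap kernel,
        # shared across all M channels at that location
        Cconv[p] = Q[p] @ Cp[p:p + K]            # (K,) @ (K, M) -> (M,)
    return Cconv

Aq = rng.standard_normal((P, K))     # stands in for the activation A^q_t
C_prev = rng.standard_normal((P, M))
Cconv = memory_cell_conv(C_prev, Aq)
```

Because the softmax weights are non-negative and sum to one, each convolved cell is a convex combination of neighbouring cells, which is why the normalization stabilizes the memory cell values.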

# Results and Evaluation

Summary of the tLSTM family of models (may be useful later):

(a) sLSTM (baseline): the implementation of sLSTM with parameters shared across all layers.

(b) 2D tLSTM: the standard 2D tLSTM.

(c) 2D tLSTM–M: removing (–) memory (M) cell convolutions from (b).

(d) 2D tLSTM–F: removing (–) feedback (F) connections from (b).

(e) 3D tLSTM: tensorizing (b) into 3D tLSTM.

(f) 3D tLSTM+LN: applying (+) Layer Normalization.

(g) 3D tLSTM+CN: applying (+) Channel Normalization.

### Performance Analysis

Fundamentals: For each configuration, we fix the parameter number and increase the tensor size to see whether the performance of the tLSTM can be boosted without increasing the parameter number. We also investigate how the runtime is affected by depth, where runtime is measured as the average GPU milliseconds spent on a forward and backward pass over one timestep of a single example.

Dataset: The Hutter Prize Wikipedia dataset consists of 100 million characters drawn from 205 different characters, including letters, XML markup, and special symbols. We model the dataset at the character level and try to predict the next character of the input sequence.