# On the Difficulty of Training Recurrent Neural Networks


# Paper Summary: On The Difficulty of Training Recurrent Neural Networks

## Introduction

Training Recurrent Neural Networks (RNNs) is difficult; one of the most prominent problems has been the vanishing and exploding gradient problem, which prevents the network from learning and fitting the data. In this paper the authors propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradient problem.

## Background

*Figure: a recurrent neural network unrolled in time*

A generic recurrent neural network has the form:

$x_{t} = F(x_{t -1}, u_{t}, \theta)$

Where:

• $x_{t}$: is the state at time $t$
• $u_{t}$: is the input at time $t$
• $\theta$: is the set of model parameters
• $F$: is the function computed by the recurrent unit

In the theoretical sections the authors make use of a specific parameterization:

$x_{t} = W_{rec} \sigma(x_{t - 1}) + W_{in} u_{t} + b$

Where:

• $W_{rec}$: is the recurrent weight matrix
• $\sigma$: is an element-wise nonlinearity
• $b$: is the bias
• $W_{in}$: is the input weight matrix
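As a concrete illustration, the parameterization above can be sketched in a few lines of NumPy. This is our own minimal sketch: the choice $\sigma = \tanh$, the dimensions, and the initialization scale are assumptions, not taken from the paper.

```python
import numpy as np

def rnn_step(x_prev, u_t, W_rec, W_in, b):
    # One step of x_t = W_rec sigma(x_{t-1}) + W_in u_t + b,
    # with sigma = tanh (our assumption; the paper leaves sigma generic).
    return W_rec @ np.tanh(x_prev) + W_in @ u_t + b

rng = np.random.default_rng(0)
n_hidden, n_in = 3, 2
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
W_in = rng.normal(0, 0.1, (n_hidden, n_in))
b = np.zeros(n_hidden)

x = np.zeros(n_hidden)            # initial state x_0
for t in range(5):                # unroll the recurrence for 5 steps
    x = rnn_step(x, rng.normal(size=n_in), W_rec, W_in, b)
```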

The following are the gradient equations for the Back-Propagation Through Time (BPTT) algorithm; the authors rewrote the equations in order to highlight the exploding gradients problem:

$\frac{\delta \varepsilon}{\delta \theta} = \sum_{1 \leq t \leq T} \frac{\delta \varepsilon_{t}}{\delta \theta}$

$\frac{\delta \varepsilon_{t}}{\delta \theta} = \sum_{1 \leq k \leq t} \left( \frac{\delta \varepsilon_{t}}{\delta x_{t}} \frac{\delta x_{t}}{\delta x_{k}} \frac{\delta^{+} x_{k}}{\delta \theta} \right)$

$\frac{\delta x_{t}}{\delta x_{k}} = \prod_{t \geq i > k} \frac{\delta x_{i}}{\delta x_{i - 1}} = \prod_{t \geq i > k} W^{T}_{rec} \textit{diag}(\sigma^{\prime}(x_{i - 1}))$

Where:

• $\varepsilon_{t}$: is the error obtained from the output at time $t$
• $\frac{\delta^{+} x_{k}}{\delta \theta}$: is the immediate partial derivative of the state $x_{k}$ with respect to $\theta$, i.e. treating $x_{k-1}$ as a constant
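The product of Jacobians in the last equation is where the exploding/vanishing behaviour comes from, and it is easy to observe numerically. The sketch below is our own construction (with $\sigma = \tanh$, arbitrary dimensions, and arbitrary weight scales): it multiplies the factors $W_{rec}^{T} \, \textit{diag}(\sigma^{\prime}(x_{i-1}))$ over many steps for a small and a large recurrent weight scale.

```python
import numpy as np

def jacobian_product(W_rec, states):
    # prod_i W_rec^T diag(sigma'(x_{i-1})) with sigma = tanh,
    # i.e. the factor dx_t/dx_k from the BPTT equations above.
    J = np.eye(W_rec.shape[0])
    for x in states:
        J = J @ (W_rec.T * (1.0 - np.tanh(x) ** 2))  # right-multiply by the diagonal
    return J

rng = np.random.default_rng(0)
n, steps = 20, 50
states = [rng.normal(size=n) for _ in range(steps)]

norms = {}
for scale in (0.3, 3.0):  # spectral radius well below / well above 1
    W_rec = rng.normal(0, scale / np.sqrt(n), (n, n))
    norms[scale] = np.linalg.norm(jacobian_product(W_rec, states))
# norms[0.3] shrinks towards zero (vanishing gradients), while
# norms[3.0] is many orders of magnitude larger (exploding gradients).
```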

### From a dynamical systems perspective

Drawing from a dynamical systems perspective similar to earlier work, dynamical systems theory states that as “$\theta$ changes the asymptotic behaviour changes smoothly almost everywhere except for certain crucial points where drastic changes occur”. Crossing such bifurcation points has the potential to cause gradients to explode.

The authors of this paper argue, however, that crossing these bifurcations does not guarantee a sudden change in the gradients if the model state is not in the basin of an attractor. On the other hand, if the model is in the basin of an attractor, crossing the boundary between basins will cause the gradients to explode.

*Figure: bifurcation diagram of a single hidden unit RNN*

The figure above depicts this argument: the x-axis is the parameter $b$ (the bias), the y-axis is the asymptotic state $x_{\infty}$, and the plotted line traces the movement of the final point attractor $x_{\infty}$ as $b$ changes. The figure shows two attractors, one emerging at $b_1$ and another disappearing at $b_2$; the boundary between the two basins of attraction is denoted by the dashed line. The blue filled circles illustrate Doya’s (1993) original hypothesis of exploding gradients, where a small change in $\theta$ could (with 50% chance) cause $x$ to change suddenly, whereas the unfilled green circles represent Pascanu’s (2013) extension of Doya’s hypothesis: if the model state starts on the boundary at time $0$, a small change in $\theta$ results in a sudden large change in $x_{t}$.

### From a geometric perspective

Aside from the dynamical systems perspective on exploding and vanishing gradients, the authors also considered a geometric perspective, using a simple one hidden unit RNN.

*Figure: error surface of a single hidden unit RNN*

Reusing the RNN state equation the authors defined:

$x_{t} = W_{rec} \sigma(x_{t - 1}) + W_{in} u_{t} + b$

Assuming no input ($u_{t} = 0$), $b = 0$, a linear $\sigma$, an initial state $x_{0}$, and writing the single recurrent weight as $\omega$, the equation simplifies to:

$x_{t} = \omega^{t} x_{0}$

Differentiating the above equation to the first and second order gives:

$\frac{\delta x_{t}}{\delta \omega} = t \omega^{t - 1} x_{0}$

$\frac{\delta^{2} x_{t}}{\delta \omega^{2}} = t (t - 1) \omega^{t - 2} x_{0}$

This implies that if the first-order derivative explodes, so will the second. As a result, when Stochastic Gradient Descent (SGD) approaches a steep wall in the error surface and attempts to step onto it, the exploding gradient deflects the update far away, possibly hindering the learning process (see figure above).
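A quick numerical sanity check of the first-order formula (our own sketch; the values of $\omega$, $x_{0}$, and $t$ are arbitrary) compares it against a central finite difference and shows how fast the derivative grows once $\omega > 1$:

```python
def x_t(omega, x0, t):
    # Linear single-unit recurrence from the text: x_t = omega^t x_0.
    return omega ** t * x0

omega, x0, t = 1.2, 0.5, 30
analytic = t * omega ** (t - 1) * x0  # dx_t/domega from the first-order formula
eps = 1e-6
numeric = (x_t(omega + eps, x0, t) - x_t(omega - eps, x0, t)) / (2 * eps)
# Already at t = 30 the derivative is in the thousands, so a tiny change
# in omega produces a huge change in x_t.
```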

## Dealing with Exploding and Vanishing Gradients

### Related Work

• L1/L2 Regularization: helps with exploding gradients, but limits the RNN to a single point attractor at the origin and prevents the model from learning generator models or exhibiting long term memory traces.
• Teacher Forcing: crosses a bifurcation boundary when the model does not exhibit asymptotic behaviour towards a desired target. This assumes the user knows what the target behaviour looks like, or how to initialize the model, in order to reduce exploding gradients.
• LSTM: the Long Short-Term Memory architecture is an attempt to deal with the vanishing gradient problem by using a linear unit that feeds back to itself with a weight of $1$. This solution, however, does not deal with the exploding gradient problem.
• Hessian-Free optimizer with structural damping: addresses both the vanishing and exploding gradient problems. The reasoning is that the high dimensionality of the space gives rise to a high probability that long term components are orthogonal to short term components, which helps with vanishing gradients, while for exploding gradients the curvature is taken into account.
• Echo State Networks: avoid the exploding and vanishing gradient problems by not learning the input and recurrent weights; these are instead drawn from hand-crafted distributions that prevent information from getting lost, since the spectral radius of the recurrent weight matrix is usually kept smaller than 1.

### Gradient Clipping

The intuition behind the gradient clipping algorithm is simple: compute the norm of the gradients and, if it is larger than a set threshold, scale the gradients by a constant defined as the threshold divided by the gradient norm. The authors suggest using a threshold value from half to ten times the average gradient norm.
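The rule above can be sketched in a few lines (a minimal NumPy sketch of the clipping rule just described; the function and variable names are ours):

```python
import numpy as np

def clip_gradient(grad, threshold):
    # If ||grad|| > threshold, rescale: grad <- grad * threshold / ||grad||.
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([3.0, 4.0])                          # norm 5
clipped = clip_gradient(g, 1.0)                   # rescaled so its norm equals 1
small = clip_gradient(np.array([0.1, 0.2]), 1.0)  # below threshold, left untouched
```

Note that the direction of the gradient is preserved; only its magnitude is capped.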

The vanishing gradient regularizer is as follows:

$\Omega = \sum_{k} \Omega_{k} = \sum_{k} \left( \frac{ \| \frac{\delta \varepsilon}{\delta x_{k + 1}} \frac{\delta x_{k + 1}}{\delta x_{k}} \| } { \| \frac{\delta \varepsilon}{\delta x_{k + 1}} \| } - 1 \right)^{2}$

The vanishing gradient problem occurs when, at time $t$, the inputs $u$ may be irrelevant and noisy, and the network learns to ignore them. This is not desirable, as the model will end up not learning anything. The authors note that the sensitivity to the inputs $u_{t} \dots u_{k}$ could be increased by increasing the norm of $\frac{\delta x_t}{\delta x_{k}}$. Doing this through the error alone would require the error to remain large, which would prevent the model from converging; the authors therefore argue that a regularizer is the more natural choice. The regularizer is a soft constraint that forces the Jacobian matrices $\frac{\delta x_{k + 1}}{\delta x_{k}}$ to preserve norm in the direction of the error $\frac{\delta \varepsilon}{\delta x_{k + 1}}$.
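A single term $\Omega_k$ of the regularizer can be computed directly from the error gradient and the Jacobian. The sketch below is our own (it assumes a row-vector convention, and the 2×2 examples are ours); it simply illustrates that a norm-preserving Jacobian incurs zero penalty while a norm-shrinking one is penalized.

```python
import numpy as np

def omega_k(de_dx_next, J_k):
    # ( ||(de/dx_{k+1}) J_k|| / ||de/dx_{k+1}|| - 1 )^2, with J_k = dx_{k+1}/dx_k.
    num = np.linalg.norm(de_dx_next @ J_k)
    den = np.linalg.norm(de_dx_next)
    return (num / den - 1.0) ** 2

# A rotation preserves norms in every direction, so the penalty is zero:
theta = 0.3
J_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
zero_penalty = omega_k(np.array([1.0, 2.0]), J_rot)

# A Jacobian that halves the error direction is penalized: (0.5 - 1)^2 = 0.25
half_penalty = omega_k(np.array([1.0, 0.0]), np.diag([0.5, 2.0]))
```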

## Experimental Results

### The Temporal Order Problem

The authors used the temporal order problem as the prototypical pathological problem for validating the clipping and regularization methods devised. The temporal order problem involves generating a long sequence of random symbols, with an $A$ or a $B$ symbol placed near the beginning of the sequence and another near the middle. The task is to correctly classify the order of the two symbols ($AA$, $AB$, $BA$, or $BB$) at the end of the sequence.
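A generator for this task might look like the following. This is a sketch of the setup described above; the alphabet size and the exact windows for the marker positions are our assumptions, not the paper's.

```python
import random

def temporal_order_example(length, n_symbols=4, seed=None):
    # One training example: distractor symbols 0..n_symbols-1, with an 'A' or 'B'
    # marker near the start and another near the middle; the label is their order.
    rng = random.Random(seed)
    seq = [rng.randrange(n_symbols) for _ in range(length)]
    first, second = rng.choice("AB"), rng.choice("AB")
    seq[rng.randrange(0, length // 10)] = first                           # near start
    seq[rng.randrange(length // 2, length // 2 + length // 10)] = second  # near middle
    return seq, first + second  # label is one of AA, AB, BA, BB

seq, label = temporal_order_example(100, seed=0)
```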

Three different RNN initializations were used for the experiment:

• sigmoid unit network: $W_{rec}, W_{in}, W_{out} \sim \mathcal{N}(0, 0.01)$
• basic tanh unit network: $W_{rec}, W_{in}, W_{out} \sim \mathcal{N}(0, 0.1)$
• smart tanh unit network: $W_{rec}, W_{in}, W_{out} \sim \mathcal{N}(0, 0.01)$

For each of the three networks, three different optimizer configurations were used:

• MSGD: Mini-batch Stochastic Gradient Descent
• MSGD-C: MSGD with gradient clipping
• MSGD-CR: MSGD-C with regularization

All configurations shared the following hyperparameters:

• $b = b_{out} = 0$
• 50 hidden neurons
• a constant learning rate of 0.01
• a clipping threshold of 1 (MSGD-C and MSGD-CR only)
• a regularization weight of 4 (MSGD-CR only)

The experiment was repeated 5 times. From the figure below we can observe the importance of gradient clipping and the regularizer: in all cases the combination of the two methods yielded the best results, regardless of which unit network was used. Furthermore, this experiment provides empirical evidence that exploding gradients correlate with tasks that require long memory traces: as the sequence length of the problem increases, clipping and regularization become more important.

*Figure [fig:experimentalresults]: results on the temporal order problem*

### Other Pathological Problems

The authors repeated other pathological problems from the literature. The results are shown below:

*Figure [fig:experimentalresults2]: results on the other pathological problems*

## Summary

The paper explores two different perspectives, dynamical systems and geometric, to explain the exploding and vanishing gradient problems in training RNNs. The authors devised methods to mitigate the corresponding problems by introducing gradient clipping and a vanishing gradient regularizer; their experimental results show that, in all cases except for the Penn Treebank dataset, clipping and the regularizer beat the state of the art for RNNs on their respective experiments.