stat946w18/Self Normalizing Neural Networks

Introduction and Motivation

While neural networks have been making headway in improving benchmark results and narrowing the gap with human-level performance, success has been largely limited to visual and sequential processing tasks, through advancements in convolutional and recurrent network structures. In most data science competitions outside of these domains, neural networks are still outperformed by algorithms such as gradient boosting and random forests. Traditional (densely connected) feed-forward neural networks (FNNs) are rarely used competitively, and on the rare occasions when they do win, they have very shallow architectures with at most four layers [10].

The authors, Klambauer et al., believe that what prevents FNNs from becoming more competitively useful is the inability to train deeper FNN structures, which would allow the network to learn more levels of abstract representations. To train a deeper network, oscillations in the distribution of activations need to be kept under control so that stable gradients can be obtained during training. Several techniques are available to normalize activations, including batch normalization [6], layer normalization [1] and weight normalization [8]. These methods work well with CNNs and RNNs, but less so with FNNs, because backpropagating through the normalization parameters introduces additional variance in the gradients, and regularization techniques like dropout further perturb the normalization effect. CNNs and RNNs are less sensitive to such perturbations, presumably due to their weight-sharing architectures, but FNNs lack this property and thus suffer from high variance in training errors, which hinders learning. Furthermore, the aforementioned normalization techniques add external layers to the model and can slow down computations.

Therefore, the authors were motivated to develop a new FNN implementation that achieves the intended effect of normalization while remaining compatible with stochastic gradient descent and dropout. Self-normalizing neural networks (SNNs) are based on scaled exponential linear units (SELU), a new activation function introduced in this paper, under which the mean and variance of the activations are proved to converge to a fixed point, making it possible to train deeper networks.

Notations

As the paper (primarily in the supplementary materials) comes with lengthy proofs, important notations are listed first.

Consider two fully-connected layers, let $x$ denote the inputs to the second layer, then $z = Wx$ represents the network inputs of the second layer, and $y = f(z)$ represents the activations in the second layer.

Assume that all $x_i$'s, $1 \leqslant i \leqslant n$, have mean $\mu := \mathrm{E}(x_i)$ and variance $\nu := \mathrm{Var}(x_i)$, and that each $y$ has mean $\widetilde{\mu} := \mathrm{E}(y)$ and variance $\widetilde{\nu} := \mathrm{Var}(y)$. Let $g$ be the mapping that takes $(\mu, \nu)$ to $(\widetilde{\mu}, \widetilde{\nu})$.

For the weight vector $w$, $n$ times the mean of the weights is $\omega := \sum_{i = 1}^n w_i$ and $n$ times their second moment is $\tau := \sum_{i = 1}^{n} w_i^2$.

Key Concepts

Self-Normalizing Neural-Net (SNN)

A neural network is self-normalizing if it possesses a mapping $g: \Omega \rightarrow \Omega$ for each activation $y$ that maps mean and variance from one layer to the next and has a stable and attracting fixed point depending on $(\omega, \tau)$ in $\Omega$. Furthermore, the mean and variance remain in the domain $\Omega$, that is $g(\Omega) \subseteq \Omega$, where $\Omega = \{ (\mu, \nu) | \mu \in [\mu_{min}, \mu_{max}], \nu \in [\nu_{min}, \nu_{max}] \}$. When iteratively applying the mapping $g$, each point within $\Omega$ converges to this fixed point.

In other words, in SNNs, if the inputs from an earlier layer ($x$) already have their mean and variance within a predefined interval $\Omega$, then the activations to the next layer ($y = f(z = Wx)$) should remain within those intervals. This holds across all pairs of connecting layers, as the normalizing effect gets propagated through the network, hence the term self-normalizing. When the mapping is applied iteratively, it should draw the mean and variance values closer to a fixed point within $\Omega$, the value of which depends on $\omega$ and $\tau$ (recall that these are computed from the weight vector).

We will design an FNN and then construct a $g$ that takes the mean and variance of each layer to those of the next, i.e. $g(\mu_i, \nu_i) = (\mu_{i+1}, \nu_{i+1})$ for all $i$, and that is a contraction mapping. Note that although the $g$ required in the SNN definition depends on the $(\omega, \tau)$ of an individual layer, the FNN that we construct will have the same values of $(\omega, \tau)$ for each layer. Intuitively, this definition says that the mean and variance of the final layer of a sufficiently deep SNN will not change when the mean and variance of the input data change, because the mean and variance pass through a contraction mapping at each layer, converging to the mapping's fixed point.

The activation function that makes an SNN possible should meet the following four conditions:

1. It can take on both negative and positive values, so it can normalize the mean;
2. It has a saturation region, so it can dampen variances that are too large;
3. It has a slope larger than one, so it can increase variances that are too small; and
4. It is a continuous curve, which is necessary for the fixed point to exist (see the definition of Banach fixed point theorem to follow).

Commonly used activation functions such as rectified linear units (ReLU), sigmoid, tanh, leaky ReLUs and exponential linear units (ELUs) do not meet all four criteria, therefore, a new activation function is needed.

Scaled Exponential Linear Units (SELUs)

One of the main ideas introduced in this paper is the SELU function. As the name suggests, it is closely related to ELU [3],

$\mathrm{elu}(x) = \begin{cases} x & x > 0 \\ \alpha e^x - \alpha & x \leqslant 0 \end{cases}$

but further builds upon it by introducing a new scale parameter $\lambda$ and proving the exact values that $\alpha$ and $\lambda$ should take on to achieve self-normalization. SELU is defined as:

$\mathrm{selu}(x) = \lambda \begin{cases} x & x > 0 \\ \alpha e^x - \alpha & x \leqslant 0 \end{cases}$

SELUs meet all four criteria listed above: they take on positive values when $x \gt 0$ and negative values when $x \lt 0$; they have a saturation region when $x$ is a large negative value; the value of $\lambda$ can be set greater than one to ensure a slope greater than one; and the curve is continuous at $x = 0$.
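The SELU activation is straightforward to implement. Below is a minimal NumPy sketch (not the authors' code) that uses high-precision values of the constants $\lambda_{01}$ and $\alpha_{01}$ derived later in this summary, and checks the four criteria numerically:

```python
import numpy as np

# Fixed-point constants lambda_01 and alpha_01 from the paper
LAM = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit."""
    x = np.asarray(x, dtype=float)
    # expm1(x) = exp(x) - 1, numerically stable near zero
    return LAM * np.where(x > 0, x, ALPHA * np.expm1(x))

# Criterion 1: both positive and negative values
assert selu(1.0) > 0 and selu(-1.0) < 0
# Criterion 2: saturation at -lambda*alpha for large negative inputs
assert abs(float(selu(-50.0)) - (-LAM * ALPHA)) < 1e-12
# Criterion 3: slope lambda > 1 on the positive side
assert selu(2.0) - selu(1.0) > 1.0
# Criterion 4: continuity at x = 0
assert abs(float(selu(1e-12)) - float(selu(-1e-12))) < 1e-9
```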

Figure 1 below gives an intuition for how SELUs normalize activations across layers. As shown, a variance dampening effect occurs when inputs are negative and far away from zero, and a variance increasing effect occurs when inputs are close to zero.

Figure 2 below plots the progression of training error on the MNIST and CIFAR10 datasets when training with SNNs versus FNNs with batch normalization at varying model depths. As shown, FNNs that adopted the SELU activation function exhibited lower and less variable training loss compared to using batch normalization, even as the depth increased to 16 and 32 layers.

Banach Fixed Point Theorem and Contraction Mappings

The underlying theory behind SNNs is the Banach fixed point theorem, which states the following: Let $(X, d)$ be a non-empty complete metric space with a contraction mapping $f: X \rightarrow X$. Then $f$ has a unique fixed point $x_f \in X$ with $f(x_f) = x_f$. Every sequence $x_n = f(x_{n-1})$ with starting element $x_0 \in X$ converges to the fixed point: $x_n \underset{n \rightarrow \infty}\rightarrow x_f$.

A contraction mapping is a function $f: X \rightarrow X$ on a metric space $X$ with distance $d$, such that for all points $\mathbf{u}$ and $\mathbf{v}$ in $X$: $d(f(\mathbf{u}), f(\mathbf{v})) \leqslant \delta d(\mathbf{u}, \mathbf{v})$, for some $0 \leqslant \delta \lt 1$.

The easiest way to prove that a mapping is a contraction is usually to show that the spectral norm [12] of its Jacobian is less than 1 [13], as was done in this paper.
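As a toy illustration (not from the paper), iterating any contraction draws every starting point to the unique fixed point. Take $f(x) = 0.5x + 1$, a contraction with Lipschitz constant $\delta = 0.5$ and fixed point $x_f = 2$:

```python
# Toy illustration of the Banach fixed point theorem:
# f(x) = 0.5*x + 1 is a contraction with delta = 0.5 < 1,
# so iteration converges to the unique fixed point x_f = 2.
def f(x):
    return 0.5 * x + 1.0

for x0 in (-100.0, 0.0, 57.0):   # any starting point works
    x = x0
    for _ in range(60):
        x = f(x)
    # |x_n - x_f| <= delta^n * |x_0 - x_f| shrinks geometrically
    assert abs(x - 2.0) < 1e-12
```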

Proving the Self-Normalizing Property

Mean and Variance Mapping Function

$g$ is derived under the assumption that the $x_i$'s are independent but not necessarily identically distributed (see footnote 2). Under this assumption (and recalling the earlier notation for $\omega$ and $\tau$),

\begin{align} \mathrm{E}(z) = \mathrm{E}(\mathbf{w}^T \mathbf{x}) = \sum_{i = 1}^n w_i \mathrm{E}(x_i) = \mu \omega \end{align}

\begin{align} \mathrm{Var}(z) = \mathrm{Var}(\sum_{i = 1}^n w_i x_i) = \sum_{i = 1}^n w_i^2 \mathrm{Var}(x_i) = \nu \sum_{i = 1}^n w_i^2 = \nu\tau \textrm{ .} \end{align}

When the weight terms are normalized, $z$ can be viewed as a weighted sum of the $x_i$'s. Wide neural network layers with a large number of nodes are common, so $n$ is usually large, and by the Central Limit Theorem [11], $z$ approaches a normal distribution $\mathcal{N}(\mu\omega, \sqrt{\nu\tau})$.
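The moment formulas above are easy to verify with a quick Monte Carlo check; a sketch using hypothetical example values $\mu = 0.3$, $\nu = 2.0$ and an arbitrary random weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                      # layer width
mu, nu = 0.3, 2.0            # mean and variance of each x_i (example values)

w = rng.normal(size=n)       # arbitrary weights
omega = w.sum()              # n times the mean of the weights
tau = (w ** 2).sum()         # n times their second moment

# draw many input vectors and form z = w^T x
x = rng.normal(mu, np.sqrt(nu), size=(50_000, n))
z = x @ w

assert abs(z.mean() - mu * omega) < 0.3    # E(z) = mu * omega
assert abs(z.var() - nu * tau) < 10.0      # Var(z) = nu * tau
```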

Using the above property, the exact form for $g$ can be obtained from the definitions of the mean and variance of continuous random variables; the resulting moment-mapping integrals, and their analytical solutions in terms of the error function, are given in the section "Details of Moment-Mapping Integrals" below.

The authors are interested in the fixed point $(\mu, \nu) = (0, 1)$, as these are the parameters of the standard normal distribution. The authors also proposed using normalized weights such that $\omega = \sum_{i = 1}^n w_i = 0$ and $\tau = \sum_{i = 1}^n w_i^2 = 1$, as this gives a simpler, cleaner expression for $\widetilde{\mu}$ and $\widetilde{\nu}$ in the subsequent calculations. This weight scheme can be achieved in several ways, for example, by drawing from a normal distribution $\mathcal{N}(0, \frac{1}{n})$ or from a uniform distribution $U(-\sqrt{3}, \sqrt{3})$.
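Note that drawing from $\mathcal{N}(0, \frac{1}{n})$ yields $\omega = 0$ and $\tau = 1$ only in expectation ($\tau$ concentrates around 1 for large $n$, while $\omega$ fluctuates). A sketch (my illustration, not the paper's prescription) that draws such weights and then enforces the normalization exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Draw weights from N(0, 1/n): tau concentrates around 1 for large n,
# while omega = sum(w) is only zero in expectation.
w = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
assert abs((w ** 2).sum() - 1.0) < 0.1

# Enforce omega = 0 and tau = 1 exactly by centering and rescaling.
w -= w.mean()
w /= np.sqrt((w ** 2).sum())
assert abs(w.sum()) < 1e-9
assert abs((w ** 2).sum() - 1.0) < 1e-9
```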

At $\widetilde{\mu} = \mu = 0$, $\widetilde{\nu} = \nu = 1$, $\omega = 0$ and $\tau = 1$, the constants $\lambda$ and $\alpha$ from the SELU function can be solved for - $\lambda_{01} \approx 1.0507$ and $\alpha_{01} \approx 1.6733$. These values are used throughout the rest of the paper whenever an expression calls for $\lambda$ and $\alpha$.
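These constants can be verified numerically: with $\mu\omega = 0$ and $\nu\tau = 1$, plugging $\lambda_{01}$ and $\alpha_{01}$ into the moment-mapping integrals should return the fixed point $(0, 1)$. A sketch using SciPy quadrature in place of the analytic solutions:

```python
import numpy as np
from scipy.integrate import quad

# High-precision values of the fixed-point constants
LAM = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(z):
    return LAM * (z if z > 0 else ALPHA * (np.exp(z) - 1.0))

def std_normal_pdf(z):
    return np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)

# Moments of selu(Z) for Z ~ N(0, 1), i.e. mu*omega = 0 and nu*tau = 1;
# points=[0] tells quad about the kink of selu at zero.
mu_t = quad(lambda z: selu(z) * std_normal_pdf(z), -30, 30, points=[0])[0]
xi_t = quad(lambda z: selu(z) ** 2 * std_normal_pdf(z), -30, 30, points=[0])[0]
nu_t = xi_t - mu_t ** 2

assert abs(mu_t) < 1e-5        # mean maps back to 0
assert abs(nu_t - 1.0) < 1e-5  # variance maps back to 1
```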

Details of Moment-Mapping Integrals

Consider the moment-mapping integrals: \begin{align} \widetilde{\mu} & = \int_{-\infty}^\infty \mathrm{selu} (z) p_N(z; \mu \omega, \sqrt{\nu \tau})dz\\ \widetilde{\nu} & = \int_{-\infty}^\infty \mathrm{selu} (z)^2 p_N(z; \mu \omega, \sqrt{\nu \tau}) dz-\widetilde{\mu}^2. \end{align}

The equation for $\widetilde{\mu}$ can be expanded as \begin{align} \widetilde{\mu} & = \frac{\lambda}{2}\left( 2\alpha\int_{-\infty}^0 (\exp(z)-1) p_N(z; \mu \omega, \sqrt{\nu \tau})dz +2\int_{0}^\infty z\, p_N(z; \mu \omega, \sqrt{\nu \tau})dz \right)\\ &= \frac{\lambda}{2}\left( 2 \alpha \frac{1}{\sqrt{2\pi\tau\nu}} \int_{-\infty}^0 (\exp(z)-1) \exp\left(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 \right) dz +2\frac{1}{\sqrt{2\pi\tau\nu}}\int_{0}^\infty z \exp\left(\frac{-1}{2\tau \nu} (z-\mu \omega)^2\right) dz \right)\\ &= \frac{\lambda}{2}\left( 2 \alpha\frac{1}{\sqrt{2\pi\tau\nu}}\int_{-\infty}^0 \exp(z) \exp\left(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 \right) dz - 2 \alpha\frac{1}{\sqrt{2\pi\tau\nu}}\int_{-\infty}^0 \exp\left(\frac{-1}{2\tau \nu} (z-\mu \omega)^2 \right) dz +2\frac{1}{\sqrt{2\pi\tau\nu}}\int_{0}^\infty z \exp\left(\frac{-1}{2\tau \nu} (z-\mu \omega)^2\right) dz \right)\\ \end{align}

The first integral can be simplified via the substitution \begin{align} q:= \frac{1}{\sqrt{2\tau \nu}}(z-\mu \omega -\tau \nu), \end{align} while the second and third can be simplified via the substitution \begin{align} q:= \frac{1}{\sqrt{2\tau \nu}}(z-\mu \omega ). \end{align} Using the definitions of $\mathrm{erf}$ and $\mathrm{erfc}$ then yields the result of the previous section.

Self-Normalizing Property Under Normalized Weights

With the weights normalized, it is possible to calculate the exact value for the spectral norm [12] of $g$'s Jacobian around the fixed point $(\mu, \nu) = (0, 1)$, which turns out to be $0.7877$. Thus, at initialization, SNNs have a stable and attracting fixed point at $(0, 1)$, which means that when $g$ is applied iteratively to a pair $(\mu, \nu)$ near the fixed point, the iterates are drawn closer to $(0, 1)$. The rate of convergence is determined by the spectral norm [12], whose value depends on $\mu$, $\nu$, $\omega$ and $\tau$.
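This can be checked numerically: approximate $g$ by quadrature, take a finite-difference Jacobian at $(\mu, \nu) = (0, 1)$ with $\omega = 0$ and $\tau = 1$, and compute its largest singular value. A sketch (my approximation, not the paper's analytic calculation):

```python
import numpy as np
from scipy.integrate import quad

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * (z if z > 0 else ALPHA * (np.exp(z) - 1.0))

def g(mu, nu, omega=0.0, tau=1.0):
    """Map (mu, nu) of one layer to (mu~, nu~) of the next by quadrature."""
    m, s = mu * omega, np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    return np.array([mu_t, xi_t - mu_t ** 2])

# Central-difference Jacobian of g at the fixed point (0, 1)
eps, p0 = 1e-5, np.array([0.0, 1.0])
J = np.empty((2, 2))
for j in range(2):
    d = np.zeros(2); d[j] = eps
    J[:, j] = (g(*(p0 + d)) - g(*(p0 - d))) / (2 * eps)

s_max = np.linalg.svd(J, compute_uv=False)[0]
# The paper reports a spectral norm of about 0.7877 here; in any case
# it must be below 1 for g to be a contraction at (0, 1).
assert s_max < 1.0
```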

Self-Normalizing Property Under Unnormalized Weights

As weights are updated during training, there is no guarantee that they would remain normalized. The authors addressed this issue through the first key theorem presented in the paper, which states that a fixed point close to (0, 1) can still be obtained if $\mu$, $\nu$, $\omega$ and $\tau$ are restricted to a specified range.

Additionally, there is no guarantee that the mean and variance of the inputs would stay within the range given by the first theorem, which led to the development of theorems #2 and #3. These two theorems established an upper and lower bound on the variance of inputs if the variance of activations from the previous layer are above or below the range specified, respectively. This ensures that the variance would not explode or vanish after being propagated through the network.

The theorems come with lengthy proofs in the supplementary materials for the paper. High-level proof sketches are presented here.

Theorem 1: Stable and Attracting Fixed Points Close to (0, 1)

Definition: We assume $\alpha = \alpha_{01}$ and $\lambda = \lambda_{01}$. We restrict the range of the variables to the domain $\mu \in [-0.1, 0.1]$, $\omega \in [-0.1, 0.1]$, $\nu \in [0.8, 1.5]$, and $\tau \in [0.9, 1.1]$. For $\omega = 0$ and $\tau = 1$, the mapping has the stable fixed point $(\mu, \nu) = (0, 1)$. For other $\omega$ and $\tau$, $g$ has a stable and attracting fixed point depending on $(\omega, \tau)$ in the $(\mu, \nu)$-domain $\mu \in [-0.03106, 0.06773]$ and $\nu \in [0.80009, 1.48617]$. All points within the $(\mu, \nu)$-domain converge to this fixed point when the mapping is applied iteratively.

Proof: In order to show that the mapping $g$ has a stable and attracting fixed point close to $(0, 1)$, the authors again applied Banach's fixed point theorem, which states that a contraction mapping on a non-empty complete metric space that does not map outside its domain has a unique fixed point, and that all points in the $(\mu, \nu)$-domain converge to the fixed point when $g$ is applied iteratively.

The two requirements are proven as follows:

1. g is a contraction mapping.

For $g$ to be a contraction mapping in $\Omega$ with distance $||\cdot||_2$, there must exist a Lipschitz constant $M \lt 1$ such that:

\begin{align} \forall \mu, \nu \in \Omega: ||g(\mu) - g(\nu)||_2 \leqslant M||\mu - \nu||_2 \end{align}

As stated earlier, $g$ is a contraction mapping if the spectral norm [12] of its Jacobian $\mathcal{H}$ (see footnote 3) is below one, or equivalently, if the largest singular value of $\mathcal{H}$ is less than 1.

To find the singular values of $\mathcal{H}$, the authors used an explicit formula derived by Blinn [2] for $2\times2$ matrices, which states that the largest singular value of the matrix is $\frac{1}{2}\left(\sqrt{(a_{11} + a_{22})^2 + (a_{21} - a_{12})^2} + \sqrt{(a_{11} - a_{22})^2 + (a_{21} + a_{12})^2}\right)$.
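Blinn's closed form is easy to sanity-check against a general-purpose SVD; a short sketch:

```python
import numpy as np

def blinn_sigma_max(A):
    """Largest singular value of a 2x2 matrix via Blinn's closed form."""
    a11, a12 = A[0, 0], A[0, 1]
    a21, a22 = A[1, 0], A[1, 1]
    # hypot(x, y) = sqrt(x**2 + y**2)
    return 0.5 * (np.hypot(a11 + a22, a21 - a12) + np.hypot(a11 - a22, a21 + a12))

rng = np.random.default_rng(42)
for _ in range(100):
    A = rng.normal(size=(2, 2))
    ref = np.linalg.svd(A, compute_uv=False)[0]
    assert abs(blinn_sigma_max(A) - ref) < 1e-12
```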

For $\mathcal{H}$, an expression for the largest singular value of $\mathcal{H}$, made up of the first-order partial derivatives of the mapping $g$ with respect to $\mu$ and $\nu$, can be derived given the analytical solutions for $\widetilde{\mu}$ and $\widetilde{\nu}$ (and denoted $S(\mu, \omega, \nu, \tau, \lambda, \alpha)$).

From the mean value theorem, we know that for some $t \in [0, 1]$,

\begin{align} S(\mu + \Delta\mu, \omega + \Delta\omega, \nu + \Delta\nu, \tau + \Delta\tau, \lambda_{\rm 01}, \alpha_{\rm 01}) - S(\mu, \omega, \nu, \tau, \lambda_{\rm 01}, \alpha_{\rm 01}) = \nabla S(\mu + t\Delta\mu, \omega + t\Delta\omega, \nu + t\Delta\nu, \tau + t\Delta\tau, \lambda_{\rm 01}, \alpha_{\rm 01}) \cdot (\Delta\mu, \Delta\omega, \Delta\nu, \Delta\tau)^T \textrm{.} \end{align}

Therefore, the distance between the singular value at $S(\mu, \omega, \nu, \tau, \lambda_{\mathrm{01}}, \alpha_{\mathrm{01}})$ and at $S(\mu + \Delta\mu, \omega + \Delta\omega, \nu + \Delta\nu, \tau + \Delta\tau, \lambda_{\mathrm{01}}, \alpha_{\mathrm{01}})$ can be bounded above by the sum of the absolute values of the partial derivatives of $S$, each multiplied by the corresponding $|\Delta|$ term.

An upper bound was obtained for each partial derivative term above, mainly through algebraic reformulations and by making use of the fact that many of the functions are monotonically increasing or decreasing on the variables they depend on in $\Omega$ (see pages 17 - 25 in the supplementary materials).

The $\Delta$ terms were then set (rather arbitrarily) to be: $\Delta \mu=0.0068097371$, $\Delta \omega=0.0008292885$, $\Delta \nu=0.0009580840$, and $\Delta \tau=0.0007323095$. Plugging in the upper bounds on the absolute values of the derivative terms for $S$ and the $\Delta$ terms yields

$S(\mu + \Delta \mu,\omega + \Delta \omega,\nu + \Delta \nu,\tau + \Delta \tau,\lambda_{\rm 01},\alpha_{\rm 01}) - S(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}) < 0.008747$

Next, the largest singular value is found from a computer-assisted fine grid search (see footnote 1) over the domain $\Omega$, with grid lengths $\Delta \mu=0.0068097371$, $\Delta \omega=0.0008292885$, $\Delta \nu=0.0009580840$, and $\Delta \tau=0.0007323095$; it turned out to be $0.9912524171058772$. Therefore,

$S(\mu + \Delta \mu,\omega + \Delta \omega,\nu + \Delta \nu,\tau + \Delta \tau,\lambda_{\rm 01},\alpha_{\rm 01}) \leq 0.9912524171058772 + 0.008747 < 1$

Since the largest singular value is smaller than 1, $g$ is a contraction mapping.

2. g does not map outside its domain.

To prove that $g$ does not map outside of the domain $\mu \in [-0.1, 0.1]$ and $\nu \in [0.8, 1.5]$, lower and upper bounds on $\widetilde{\mu}$ and $\widetilde{\nu}$ were obtained to show that they stay within $\Omega$.

First, it was shown that the derivatives of $\widetilde{\mu}$ and $\widetilde{\xi}$ (the second moment of the activations) with respect to $\mu$ and $\nu$ are either positive or have the sign of $\omega$ in $\Omega$, so the minimum and maximum points are found at the borders. In $\Omega$, it then follows that

\begin{align} -0.03106 <\widetilde{\mu}(-0.1,0.1, 0.8, 0.95, \lambda_{\rm 01}, \alpha_{\rm 01}) \leq & \widetilde{\mu} \leq \widetilde{\mu}(0.1,0.1,1.5, 1.1, \lambda_{\rm 01}, \alpha_{\rm 01}) < 0.06773 \end{align}

and

\begin{align} 0.80467 <\widetilde{\xi}(-0.1,0.1, 0.8, 0.95, \lambda_{\rm 01}, \alpha_{\rm 01}) \leq & \widetilde{\xi} \leq \widetilde{\xi}(0.1,0.1,1.5, 1.1, \lambda_{\rm 01}, \alpha_{\rm 01}) < 1.48617. \end{align}

Since $\widetilde{\nu} = \widetilde{\xi} - \widetilde{\mu}^2$,

\begin{align} 0.80009 & \leqslant \widetilde{\nu} \leqslant 1.48617 \end{align}

The bounds on $\widetilde{\mu}$ and $\widetilde{\nu}$ are narrower than those for $\mu$ and $\nu$ set out in $\Omega$, therefore $g(\Omega) \subseteq \Omega$.

Theorem 2: Decreasing Variance from Above

Definition: For $\lambda = \lambda_{01}$, $\alpha = \alpha_{01}$, and the domain $\Omega^+: -1 \leqslant \mu \leqslant 1, -0.1 \leqslant \omega \leqslant 0.1, 3 \leqslant \nu \leqslant 16$, and $0.8 \leqslant \tau \leqslant 1.25$, we have for the mapping of the variance $\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ under $g$: $\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \lt \nu$.

Theorem 2 states that when $\nu \in [3, 16]$, the mapping $g$ draws it to below 3 when applied across layers, thereby establishing an upper bound of $\nu \lt 3$ on variance.

Proof: The authors proved the inequality by showing that $\widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01}) - \nu \lt 0$; since the second moment $\widetilde{\xi}$ is greater than or equal to the variance $\widetilde{\nu}$, this implies $\widetilde{\nu} \lt \nu$. The behavior of $\frac{\partial }{\partial \mu } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$, $\frac{\partial }{\partial \omega } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$, $\frac{\partial }{\partial \nu } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$, and $\frac{\partial }{\partial \tau } \widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ is used to find the bounds on $\widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01}) - \nu$ (see pages 9 - 13 in the supplementary materials). Again, the partial derivative terms are monotonic, which made it possible to find the upper bound at the border values. It was shown that the maximum value of $\widetilde{\xi}(\mu, \omega, \nu, \tau, \lambda_{01}, \alpha_{01}) - \nu$ does not exceed $-0.0180173$.
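The theorem can be spot-checked numerically at the hypothetical point $\mu\omega = 0$, $\tau = 1$, with quadrature standing in for the analytic solution: the mapped variance should fall below $\nu$ throughout $[3, 16]$.

```python
import numpy as np
from scipy.integrate import quad

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * (z if z > 0 else ALPHA * (np.exp(z) - 1.0))

def mapped_variance(nu, m=0.0, tau=1.0):
    """nu~ for inputs with mean m = mu*omega and variance nu*tau."""
    s = np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    return xi_t - mu_t ** 2

# Theorem 2: the variance mapping decreases for nu in [3, 16]
for nu in (3.0, 9.0, 16.0):
    assert mapped_variance(nu) < nu
```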

Theorem 3: Increasing Variance from Below

Definition: We consider $\lambda = \lambda_{01}$, $\alpha = \alpha_{01}$, and the domain $\Omega^-: -0.1 \leqslant \mu \leqslant 0.1$ and $-0.1 \leqslant \omega \leqslant 0.1$. For the domain $0.02 \leqslant \nu \leqslant 0.16$ and $0.8 \leqslant \tau \leqslant 1.25$ as well as for the domain $0.02 \leqslant \nu \leqslant 0.24$ and $0.9 \leqslant \tau \leqslant 1.25$, the mapping of the variance $\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha)$ increases: $\widetilde{\nu}(\mu, \omega, \nu, \tau, \lambda, \alpha) \gt \nu$.

Theorem 3 states that the variance $\widetilde{\nu}$ increases when the variance is smaller than in $\Omega$. Under the proposed mapping, the lower bound on the variance is $\widetilde{\nu} \gt 0.16$ when $0.8 \leqslant \tau$ and $\widetilde{\nu} \gt 0.24$ when $0.9 \leqslant \tau$.

Proof: According to the mean value theorem, for a $t \in [0, 1]$,

Similar to the proof of Theorem 2 (except that we are interested in the smallest $\widetilde{\nu}$ instead of the largest), the lower bound for $\frac{\partial }{\partial \nu} \widetilde{\xi}(\mu,\omega,\nu+t(\nu_{\mathrm{min}}-\nu),\tau,\lambda_{\rm 01},\alpha_{\rm 01})$ can be derived and substituted into the relationship $\widetilde{\nu} = \widetilde{\xi}(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}) - (\widetilde{\mu}(\mu,\omega,\nu,\tau,\lambda_{\rm 01},\alpha_{\rm 01}))^2$. The lower bound depends on $\tau$ and $\nu$, and in the domain $\Omega^{-}$ listed, it is slightly above $\nu$.
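As with Theorem 2, the claim can be spot-checked numerically at the hypothetical point $\mu\omega = 0$, $\tau = 1$ (quadrature in place of the analytic solution): the mapped variance should exceed $\nu$ on the small-variance domain.

```python
import numpy as np
from scipy.integrate import quad

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(z):
    return LAM * (z if z > 0 else ALPHA * (np.exp(z) - 1.0))

def mapped_variance(nu, m=0.0, tau=1.0):
    """nu~ for inputs with mean m = mu*omega and variance nu*tau."""
    s = np.sqrt(nu * tau)
    pdf = lambda z: np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    mu_t = quad(lambda z: selu(z) * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    xi_t = quad(lambda z: selu(z) ** 2 * pdf(z), m - 30 * s, m + 30 * s, points=[0.0])[0]
    return xi_t - mu_t ** 2

# Theorem 3: the variance mapping increases for small nu
for nu in (0.02, 0.1, 0.16):
    assert mapped_variance(nu) > nu
```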

Implementation Details

Initialization

As previously explained, SNNs work best when inputs to the network are standardized, and the weights are initialized with mean of 0 and variance of $\frac{1}{n}$ to help converge to the fixed point $(\mu, \nu) = (0, 1)$.
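A small NumPy simulation of this behavior (an illustrative sketch, not the authors' code): a 32-layer network of SELU units with standardized inputs and $\mathcal{N}(0, \frac{1}{n})$ weights keeps activation statistics close to $(0, 1)$ even at depth.

```python
import numpy as np

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    return LAM * np.where(x > 0, x, ALPHA * np.expm1(x))

rng = np.random.default_rng(0)
n, depth, batch = 512, 32, 256

# Standardized inputs; weights initialized with mean 0 and variance 1/n
h = rng.normal(0.0, 1.0, size=(batch, n))
for _ in range(depth):
    W = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, n))
    h = selu(h @ W)

# Activation statistics stay near the fixed point (0, 1)
assert abs(h.mean()) < 0.3
assert 0.5 < h.var() < 2.0
```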

Dropout Technique

The authors reason that regular dropout, which randomly sets activations to 0 with probability $1 - q$, is not compatible with SELUs. This is because the low-variance region of SELUs is the saturation value $\lim_{x \rightarrow -\infty} \mathrm{selu}(x) = -\lambda \alpha$, not 0. Contrast this with ReLUs, which work well with dropout since $\lim_{x \rightarrow -\infty} \mathrm{relu}(x) = 0$ is their saturation region. Therefore, a new dropout technique for SELUs was needed, termed alpha dropout.

With alpha dropout, activations are randomly set to the saturation value $\alpha' := -\lambda\alpha \approx -1.7581$ with probability $1 - q$.

The updated mean and variance of the activations are now: $\mathrm{E}(xd + \alpha'(1 - d)) = \mu q + \alpha'(1 - q)$

and

$\mathrm{Var}(xd + \alpha'(1 - d)) = q((1-q)(\alpha' - \mu)^2 + \nu)$

Activations need to be transformed (e.g. scaled) after dropout to maintain the same mean and variance. In regular dropout, conserving the mean and variance corresponds to scaling activations by a factor of $1/q$ during training. To ensure that the mean and variance are unchanged after alpha dropout, the authors used an affine transformation $a(xd + \alpha'(1 - d)) + b$ and solved for the values of $a$ and $b$, giving $a = (\frac{\nu}{q((1-q)(\alpha' - \mu)^2 + \nu)})^{\frac{1}{2}}$ and $b = \mu - a(q\mu + (1-q)\alpha')$. As the values for $\mu$ and $\nu$ are set to $0$ and $1$ throughout the paper, these expressions simplify to $a = (q + \alpha'^2 q(1-q))^{-\frac{1}{2}}$ and $b = -(q + \alpha'^2 q (1-q))^{-\frac{1}{2}}((1 - q)\alpha')$, where $\alpha' \approx -1.7581$.
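The affine correction can be verified empirically; a sketch of alpha dropout on standardized activations (assuming $\mu = 0$, $\nu = 1$ and an example keep probability $q = 0.9$):

```python
import numpy as np

LAM, ALPHA = 1.0507009873554805, 1.6732632423543772
alpha_p = -LAM * ALPHA          # alpha' = -lambda*alpha, about -1.7581
q = 0.9                         # keep probability (example value)

# Affine correction for mu = 0, nu = 1
a = (q + alpha_p ** 2 * q * (1 - q)) ** -0.5
b = -a * (1 - q) * alpha_p

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1_000_000)   # standardized activations
d = rng.random(x.size) < q                 # keep mask, Bernoulli(q)

# Drop to the saturation value alpha', then apply a*(.) + b
y = a * (x * d + alpha_p * (~d)) + b

assert abs(y.mean()) < 0.02           # mean preserved at ~0
assert abs(y.var() - 1.0) < 0.02      # variance preserved at ~1
```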

Empirically, the authors found that dropout rates (1-q) of $0.05$ or $0.10$ worked well with SNNs.

Optimizers

Through experiments, the authors found that stochastic gradient descent, momentum, Adadelta and Adamax work well on SNNs. For Adam, configuration parameters $\beta_2 = 0.99$ and $\epsilon = 0.01$ were found to be more effective.

Experimental Results

Three sets of experiments were conducted to compare the performance of SNNs to six other FNN structures and to other machine learning algorithms, such as support vector machines and random forests. The experiments were carried out on (1) 121 UCI Machine Learning Repository datasets, (2) the Tox21 chemical compound toxicity dataset (with 12,000 compounds and 270,000 features), and (3) the HTRU2 dataset of statistics on radio wave signals from pulsar candidates (with 18,000 observations and eight features). In each set of experiments, a hyperparameter search was conducted on a validation set to select parameters such as the number of hidden units, number of hidden layers, learning rate, regularization parameter, and dropout rate (see pages 95 - 107 of the supplementary material for the exact hyperparameters considered). Whenever models of different setups gave identical results on the validation data, preference was given to the structure with more layers, a lower learning rate and a higher dropout rate.

The six FNN structures considered were: (1) FNNs with ReLU activations, no normalization and “Microsoft weight initialization” (MSRA) [5] to control the variance of input signals [5]; (2) FNNs with batch normalization [6], in which normalization is applied to activations of the same mini-batch; (3) FNNs with layer normalization [1], in which normalization is applied on a per layer basis for each training example; (4) FNNs with weight normalization [8], whereby each layer’s weights are normalized by learning the weight’s magnitude and direction instead of the weight vector itself; (5) highway networks, in which layers are not restricted to being sequentially connected [9]; and (6) an FNN-version of residual networks [4], with residual blocks made up of two or three densely connected layers.

On the Tox21 dataset, the authors demonstrated the self-normalizing effect by comparing the distribution of neural inputs $z$ at initialization and after 40 epochs of training to that of the standard normal. As Figure 3 shows, the distribution of $z$ remained similar to a normal distribution.

On all three sets of classification tasks, the authors demonstrated that SNNs outperformed the other FNN counterparts on accuracy and AUC measures, came close to the state-of-the-art results on the Tox21 dataset with an 8-layer network, and produced a new state-of-the-art AUC on predicting pulsars for the HTRU2 dataset by a small margin (achieving an AUC of 0.98, averaged over 10 cross-validation folds, versus the previous record of 0.976).

On UCI datasets with fewer than 1,000 observations, SNNs did not outperform SVMs or random forests in terms of average rank in accuracy, but on datasets with at least 1,000 observations, SNNs showed the best overall performance (average rank of 5.8, compared to 6.1 for support vector machines and 6.6 for random forests). The hyperparameter search also revealed that the selected SNNs were, on average, 10.8 layers deep, deeper than the other FNN architectures tried.

Future Work

Although not the focus of this paper, the authors also briefly noted that their initial experiments with applying SELUs to relatively simple CNN structures showed promising results. This is not surprising, given that ELUs, which lack the self-normalizing property, have already been shown to work well with CNNs, demonstrating faster convergence than ReLU networks and even pushing the state-of-the-art error rates on CIFAR-100 at the time of publication in 2015 [3].

Since the paper was published, SELUs have been adopted by several researchers, not just with FNNs but also with CNNs, GANs, autoencoders, reinforcement learning and RNNs. In a few cases, the researchers concluded that networks trained with SELUs converged faster than those trained with ReLUs, and that SELUs match the convergence quality of batch normalization. There is potential for SELUs to be incorporated into more architectures in the future.

Critique

Overall, the authors presented a convincing case for using SELUs (along with proper initialization and alpha dropout) on FNNs. FNNs trained with SELU have more layers than those with other normalization techniques, so the work here provides a promising direction for making traditional FNNs more powerful. There are not as many well-established benchmark datasets to evaluate FNNs, but the experiments carried out, particularly on the larger Tox21 dataset, showed that SNNs can be very effective at classification tasks.

The only question I have with the proofs is the lack of explanation for how the domains $\Omega$, $\Omega^-$ and $\Omega^+$ are determined, which is an important consideration because they are used for deriving the upper and lower bounds on the expressions needed to prove the three theorems. The ranges appear to be set through trial and error and heuristics to ensure the numbers work out (e.g. making the spectral norm [12] of $\mathcal{J}$ as large as possible while staying below 1, so as to ensure $g$ is a contraction mapping). It is therefore not clear whether they are unique conditions, or whether the parameters will remain within those prespecified ranges throughout training; and if the parameters can stray from the ranges provided, the question of what happens to the self-normalizing property is not addressed. Perhaps that is why the authors gave preference to models with a deeper structure and smaller learning rate during experiments, to help the parameters stay within their domains. Further, in addition to the hyperparameters considered, it would be helpful to know the final values that went into the best-performing models, for a better understanding of what range of values works better for SNNs empirically.

Conclusion

The SNN structure proposed in this paper is built on the traditional FNN structure with a few modifications, including the use of SELUs as the activation function (with $\lambda \approx 1.0507$ and $\alpha \approx 1.6733$), alpha dropout, network weights initialized with mean of zero and variance $\frac{1}{n}$, and inputs normalized to mean of zero and variance of one. It is simple to implement while being backed up by detailed theory.

When properly initialized, SELUs will draw neural inputs towards a fixed point of zero mean and unit variance as the activations are propagated through the layers. The self-normalizing property is maintained even when weights deviate from their initial values during training (under mild conditions). When the variance of inputs goes beyond the prespecified range imposed, they are still bounded above and below so SNNs do not suffer from exploding and vanishing gradients. This self-normalizing property allows SNNs to be more robust to perturbations in stochastic gradient descent, so deeper structures with better prediction performance can be built.

In the experiments conducted, the authors demonstrated that SNNs outperformed FNNs trained with other normalization techniques, such as batch, layer and weight normalization, and specialized architectures, such as highway or residual networks, on several classification tasks, including on the UCI Machine Learning Repository datasets. The adoption of SELUs by other researchers also lends credence to the potential for SELUs to be implemented in more neural network architectures.

References

1. Ba, Kiros and Hinton. "Layer Normalization". arXiv:1607.06450. (2016).
2. Blinn. "Consider the Lowly 2X2 Matrix." IEEE Computer Graphics and Applications. (1996).
3. Clevert, Unterthiner, Hochreiter. "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)." arXiv: 1511.07289. (2015).
4. He, Zhang, Ren and Sun. "Deep Residual Learning for Image Recognition." arXiv:1512.03385. (2015).
5. He, Zhang, Ren and Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." arXiv:1502.01852. (2015).
6. Ioffe and Szegedy. "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariance Shift." arXiv:1502.03167. (2015).
7. Klambauer, Unterthiner, Mayr and Hochreiter. "Self-Normalizing Neural Networks." arXiv: 1706.02515. (2017).
8. Salimans and Kingma. "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks." arXiv:1602.07868. (2016).
9. Srivastava, Greff and Schmidhuber. "Highway Networks." arXiv:1505.00387 (2015).
10. Unterthiner, Mayr, Klambauer and Hochreiter. "Toxicity Prediction Using Deep Learning." arXiv:1503.01445. (2015).
11. https://en.wikipedia.org/wiki/Central_limit_theorem
12. http://mathworld.wolfram.com/SpectralNorm.html
13. https://www.math.umd.edu/~petersd/466/fixedpoint.pdf

Online Resources

https://github.com/bioinf-jku/SNNs (GitHub repository maintained by some of the paper's authors)

Footnotes

1. Error propagation analysis: The authors performed an error analysis to quantify the potential numerical imprecisions propagated through the numerous operations performed. The potential imprecision $\epsilon$ was quantified by applying the mean value theorem

$|f(x + \Delta x) - f(x)| \leqslant ||\nabla f(x + t\Delta x)|| \; ||\Delta x|| \textrm{ for some } t \in [0, 1]\textrm{.}$

The error propagation rules, i.e. bounds on $|f(x + \Delta x) - f(x)|$, were first obtained for simple operations such as addition, subtraction, multiplication, division, square root, the exponential function, the error function and the complementary error function. Then, the error bounds on the compound terms making up $\Delta S(\mu, \omega, \nu, \tau, \lambda, \alpha)$ were found by decomposing them into these simpler expressions. If each of the variables has a precision of $\epsilon$, then it turns out that $S$ has a precision better than $292\epsilon$. For a machine with a precision of $2^{-56}$, the rounding error is $\epsilon \approx 10^{-16}$, and $292\epsilon \lt 10^{-13}$. In addition, all computations are correct up to 3 ulps (“unit in last place”) for the hardware architectures and GNU C library used, with 1 ulp being the highest precision that can be achieved.

2. Independence Assumption: The classic definition of the central limit theorem requires the $x_i$'s to be independent and identically distributed, which is not guaranteed to hold in a neural network layer. However, according to the Lyapunov CLT, the $x_i$'s do not need to be identically distributed as long as their $(2 + \delta)$th moments exist and satisfy the Lyapunov condition on the rate of growth of the sum of the moments [11]. In addition, the CLT has also been shown to be valid under weak dependence under mixing conditions [11]. Therefore, the authors argue that the central limit theorem can be applied to network inputs.

3. $\mathcal{H}$ versus $\mathcal{J}$ Jacobians: In solving for the largest singular value of the Jacobian $\mathcal{H}$ of the mapping $g: (\mu, \nu) \rightarrow (\widetilde{\mu}, \widetilde{\nu})$, the authors first worked with the terms in the Jacobian $\mathcal{J}$ of the mapping $h: (\mu, \nu) \rightarrow (\widetilde{\mu}, \widetilde{\xi})$ instead, because the influence of $\widetilde{\mu}$ on $\widetilde{\nu}$ is small when $\widetilde{\mu}$ is small in $\Omega$, and $\mathcal{H}$ can be easily expressed in terms of $\mathcal{J}$. $\mathcal{J}$ was referenced in the paper, but $\mathcal{H}$ is used in this summary to avoid confusion.