meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting

Introduction

This paper introduces a simple and effective technique for neural network learning. The main technique is a modification of the vanilla backpropagation algorithm: after a forward pass has been carried out in the usual fashion, only a small subset of the full gradient is retained for computing the parameter updates. More precisely, a simple quantization technique is employed to sparsify the gradient vectors: in the first gradient of a backpropagation step, only the entries with the largest magnitudes are kept and all other entries are set to zero. The remaining gradients (those with respect to the weights and biases of the neural network) are then computed with the chain rule in the usual way, starting from the sparsified gradient obtained at the top layer. Since only a small subset of the weight matrix is modified, we obtain a linear reduction in the computational cost. The experimental results presented in the paper suggest that accuracy is improved rather than degraded. The proposed technique is named the minimal effort back propagation method (meProp).
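As a minimal illustration of this sparsification step (the helper name below is ours, not from the paper's released code), a short NumPy sketch:

  import numpy as np

  def sparsify_top_k(grad, k):
      # Keep the k largest-magnitude entries of a gradient vector, zero the rest.
      top_idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |g_i|
      sparse = np.zeros_like(grad)
      sparse[top_idx] = grad[top_idx]
      return sparse

  g = np.array([0.05, -1.2, 0.3, 0.9, -0.01, 0.2])
  print(sparsify_top_k(g, k=2))  # only -1.2 and 0.9 survive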

The backpropagation step in neural network training entails a high computational cost, since each iteration requires calculating full gradient vectors and matrices and then updating all model parameters. The main idea of the paper is to find a small but critical subset of the gradient information and, in each learning step, update only this minimal subset of the parameters. This leads to sparsified gradients: only the highly relevant parameters are updated and the rest remain untouched. Fig. 1 shows an abstract view of the proposed approach.

Two important questions:

1) In the process of stochastic learning, how do we find a highly relevant subset of parameters from the current sample?

One solution to this question is the top-$k$ search method, which identifies the most important parameters. Experimental results suggest that with this technique we can update only 1–4% of the weights at each backpropagation pass, and that this does not lead to a larger number of training iterations.
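As a rough, hypothetical illustration of that percentage (borrowing the hidden dimension of 500 from the MNIST MLP described later, and taking $k = 20$ as an example value in that range), the top-$k$ backward pass forms only

\[ \frac{k \cdot m}{n \cdot m} = \frac{20}{500} = 4\% \]

of the entries of the weight-gradient matrix for that layer.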

Another possible solution is the method of the paper http://papers.nips.cc/paper/6372-learning-the-number-of-neurons-in-deep-networks.pdf: a group sparsity regularizer can be used to identify the neurons that have many nonzero parameters, which can be considered the highly relevant parameters.

2) Does this process of selecting a small subset of model parameters hurt accuracy?

The results demonstrate that, rather than degrading accuracy, this sparsification actually improves it in most settings. This somewhat surprising result is attributed to a dropout-like effect that helps prevent overfitting: because the minimal-effort update does not modify weakly relevant parameters, it is plausible that it helps avoid overfitting the data.

The authors demonstrate the proposed approach with different deep learning models (LSTM, MLP), different optimizers (Adam, AdaGrad), and tasks in NLP and image recognition.

Related Work

Notable work related to this paper includes the following:

In 1990, Tollenaere [1] proposed SuperSAB, an adaptive acceleration strategy for error backpropagation learning. It was shown to converge orders of magnitude faster than the original backpropagation algorithm while being only slightly less stable. In addition, the algorithm is very insensitive to the choice of parameter values and has excellent scaling properties.

In 1993, Riedmiller and Braun [2] proposed an algorithm called RPROP. To overcome the inherent disadvantages of pure gradient descent, it performs a local adaptation of the weight updates according to the behavior of the error function. More specifically, the authors define an individual update-value for each weight. When the update of a weight overshoots (mathematically, the partial derivative with respect to that weight changes its sign), the update-value of that weight decreases; otherwise, it increases. Once the update-values have been adapted, the weight update is carried out as follows: when the partial derivative of the error function with respect to a specific weight is positive, the weight is decreased by its update-value, and increased otherwise.

In 2014, Srivastava et al. [3] proposed dropout. Large networks are slow to use, which makes it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. The key idea of dropout is to randomly drop units (along with their connections) from the neural network during training, which prevents units from co-adapting too much.

The work proposed by the authors of the meProp method is quite different from the three related works discussed above.

In 2017, Shazeer et al. [4] presented a Sparsely-Gated Mixture-of-Experts (MoE) layer, consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. They applied this approach to machine translation and reported significantly better results. However, their method is tied to a specific mixture-of-experts architecture, whereas meProp does not have this sort of limitation.

Proposed Approach

The original backpropagation computes the "full gradient" for the input vector and the weight matrix. In meProp, however, backpropagation computes an "approximate gradient" by keeping the top-k values of the backward-flowing gradient and masking the remaining values to 0. That is, only the k elements with the largest absolute values are kept and the rest are set to 0.

The authors describe the conversion of traditional backpropagation into the backpropagation used in meProp using, as an example, a computation unit with one linear transformation and one non-linear transformation, given by eqs. (1) and (2): \begin{align*} y &= W x \quad \quad \quad (1) \end{align*} \begin{align*} z &= \sigma (y) \quad \quad \quad (2) \end{align*} where $W \in \mathbb{R}^{n \times m}$, $x \in \mathbb{R}^{m}$, $y \in \mathbb{R}^{n}$, $z \in \mathbb{R}^{n}$, $m$ is the dimension of the input vector, $n$ is the dimension of the output vector, and $\sigma$ is a non-linear function (e.g., ReLU, tanh, or sigmoid). During backpropagation, we need to compute the gradients with respect to the parameter matrix $W$ and the input vector $x$:

\[ \frac{\partial z}{\partial W_{ij}} = \sigma'_{i} x^{T}_{j}, \quad i \in [1, n], \quad j \in [1, m] \quad \quad \quad (3) \]

\[ \frac{\partial z}{\partial x_{i}} = \sum\limits_{j} W^{T}_{ij} \sigma'_{j}, \quad i \in [1, n], \quad j \in [1, m] \quad \quad \quad (4) \]

Since the proposed meProp keeps only the top-k elements based on magnitude, eqs. (3) and (4) are transformed into (5) and (6), respectively:

\[ \frac{\partial z}{\partial W_{ij}} \leftarrow \sigma'_{i} x^{T}_{j} \quad \text{if} \ i \in \{ t_{1}, t_{2}, \dots, t_{k} \}, \quad \text{else} \quad 0 \quad \quad (5) \]

\[ \frac{\partial z}{\partial x_{i}} \leftarrow \sum\limits_{j} W^{T}_{ij} \sigma'_{j} \quad \text{if} \ j \in \{ t_{1}, t_{2}, \dots, t_{k} \}, \quad \text{else} \quad 0 \quad \quad (6) \]


The original back-propagation computes the gradient of the matrix W and the gradient of the input vector x as shown in eq. (7) and (8), respectively:

\[ \frac{\partial L}{\partial W} = \frac{\partial L}{\partial y} \cdot \frac{\partial y}{\partial W} \quad \quad (7) \]

\[ \frac{\partial L}{\partial x} = \frac{\partial y}{\partial x} \cdot \frac{\partial L}{\partial y} \quad \quad (8) \]

Since the proposed meProp selects the top-k elements of the traditional gradient to approximate it, the gradients of $W$ and $x$ are transformed into those shown in eqs. (9) and (10):


\[ \frac{\partial L}{\partial W} \leftarrow top_{k}\left(\frac{\partial L}{\partial y}\right) \cdot \frac{\partial y}{\partial W} \quad \quad (9) \]

\[ \frac{\partial L}{\partial x} \leftarrow \frac{\partial y}{\partial x} \cdot top_{k}\left(\frac{\partial L}{\partial y}\right) \quad \quad (10) \]

The intuition behind the discussed conversions is depicted in Fig. 2.
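To make the conversion concrete, the following is a minimal NumPy sketch of the backward pass for the unit $y = Wx$ with the top-$k$ selection of eqs. (9) and (10). The function name and shapes are ours for illustration, not the authors' GPU implementation; in an efficient implementation only the $k$ selected rows of $\partial L/\partial W$ would actually be formed, which is where the linear reduction in cost comes from.

  import numpy as np

  def meprop_backward(W, x, dL_dy, k):
      # Keep only the k largest-magnitude components of dL/dy (top-k selection).
      top_idx = np.argpartition(np.abs(dL_dy), -k)[-k:]
      sparse = np.zeros_like(dL_dy)
      sparse[top_idx] = dL_dy[top_idx]
      # Eq. (9): dL/dW = top_k(dL/dy) x^T -- only k rows are nonzero.
      dL_dW = np.outer(sparse, x)
      # Eq. (10): dL/dx = W^T top_k(dL/dy).
      dL_dx = W.T @ sparse
      return dL_dW, dL_dx

  # Toy usage: n = 4 outputs, m = 3 inputs, keep k = 2 gradient components.
  rng = np.random.default_rng(0)
  W, x = rng.standard_normal((4, 3)), rng.standard_normal(3)
  dW, dx = meprop_backward(W, x, rng.standard_normal(4), k=2)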

Where to apply meProp: In the learning tasks considered, matrix-to-matrix and matrix-to-vector multiplications consume more than 90% of the backpropagation time. The authors therefore apply meProp only to the backpropagation from the output of such a multiplication to its inputs. They apply meProp (i.e., top-k sparsification) at every hidden layer, because the sparsified gradient becomes dense again as it propagates from one layer to the next. The value of k for the output layer can differ from the k used for the hidden layers because of the difference in dimensionality. For example, there are 10 classes in the MNIST task, so the dimension of the output layer is 10, while the MLP has a hidden dimension of 500; thus, the best k for the output layer can differ from that of the hidden layers.

Exception: For other element-wise operations (e.g., activation functions), the original backpropagation procedure is kept as it is, because these operations are already fast compared with matrix-matrix or matrix-vector multiplications.

Choice of top-k algorithm: A variant (focused on memory reuse) of the min-heap-based top-k selection method is used, with O(n log k) time complexity and O(k) space complexity. This avoids sorting the entire vector. (A min-heap is a binary tree in which the data in each node is less than or equal to the data in that node's children.)
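A sketch of the min-heap idea, using Python's heapq (this is our illustration of the O(n log k) scheme, not the authors' memory-reuse variant):

  import heapq

  def top_k_indices(values, k):
      # Maintain a size-k min-heap of (|value|, index); the root is always the
      # smallest magnitude currently kept, so each of the n elements costs at
      # most O(log k) work and only O(k) extra space is used.
      heap = []
      for i, v in enumerate(values):
          item = (abs(v), i)
          if len(heap) < k:
              heapq.heappush(heap, item)
          elif item > heap[0]:
              heapq.heapreplace(heap, item)  # evict the smallest kept entry
      return [i for _, i in heap]

  # The two largest-magnitude entries are at indices 3 and 1.
  print(top_k_indices([0.05, -1.2, 0.3, 0.9, -0.01, 0.2], k=2))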

Experiments and Configurations

To establish that the approach is general purpose, the authors performed experiments with different deep learning models (LSTM, MLP), different optimizers (Adam, AdaGrad), and different problem sets (part-of-speech tagging, transition-based dependency parsing, and MNIST image recognition).

POS-Tag: Part-of-speech tagging is the process of identifying and assigning the parts of speech, such as noun, verb, and adjective, to the words of a corpus. Baseline model: LSTM. Benchmark dataset: Penn Treebank corpus, with the Wall Street Journal sections used for training and testing.

Parsing: Baseline model: MLP. Benchmark dataset: Penn Treebank corpus, with the Wall Street Journal sections used for training, development, and testing. The most common metrics for evaluating parsers are the labeled and unlabeled attachment scores; in this work the authors use the unlabeled attachment score. Labeled attachment refers to the correct matching of a word to its head along with the correct dependency relation; unlabeled attachment ignores the dependency relation and focuses only on the correctness of the assigned head.

MNIST: The MNIST dataset consists of hand-written digits and the solution involves classifying the images among 10 digit classes. Baseline model: MLP. For training, development, and testing: MNIST dataset.

In the configurations for Parsing and MNIST, the authors use the same k for the output and hidden layers. For POS-Tag, they use a different k for the output layer than for the hidden layers; in fact, due to the low dimensionality of the output layer in POS-Tag, meProp is not applied to it.

The code for the paper can be found on Github : https://github.com/jklj077/meProp

Results

meProp is applied to the linear transformations, which account for the major part of the computational cost. The authors call the backpropagation time of these linear transformations the Backprop Time; it does not include the time required for the non-linear activations, which usually account for less than 2% of the computational cost. The total backpropagation time, including the non-linear activations, is reported as the Overall Backprop Time.

The results show that meProp substantially speeds up backpropagation and provides a linear reduction in computational cost. The authors attribute the accompanying accuracy gains mainly to the fact that meProp does not modify weakly relevant parameters, which makes overfitting less likely, similar to the dropout effect. The results also indicate that the proposed approach is independent of the specific optimization method used.

The graphs in Fig. 4 show that meProp addresses the problem of overfitting and that it gives better accuracy when the top-k weights are selected rather than random ones, suggesting that the top-k elements carry the most important part of the gradient information. This raises the question of whether, instead of dropout randomly turning off a few neurons, the same could be done more deterministically based on each neuron's contribution to the final prediction. It was also observed that meProp can achieve further improvements over dropout in reducing overfitting, so a model should take advantage of both meProp and dropout. Adding hidden layers does not hurt the performance of the model. Although this may be the case for the current set of test cases, a better understanding of the interaction between hidden-layer size and the choice of k could be obtained by varying k for different hidden-unit sizes [math]\displaystyle{ h }[/math] while keeping [math]\displaystyle{ k*h }[/math] (or a similarly related term) constant. This is studied in [5], where the authors kept [math]\displaystyle{ p*n }[/math] constant and obtained greater reductions in training error for smaller values of p (p being the probability of retaining a unit; low p means more units are dropped). The relevant numerical results are shown in Tables 1–5.

Because the sparsity pattern is consistent, the large sparse gradient matrix can be converted into a small dense matrix by simply removing the zero values. The authors refer to this method as the simplified unified top-$k$ method; the results are presented in Table 6.

Further speed-up: To further speed up backpropagation on GPUs, the authors present a simple unified top-k approach (implemented in PyTorch). The main idea is to treat the entire mini-batch as one "big training example", where the top-k operation is based on the averaged values of all examples in the mini-batch. The relevant numerical results are shown in Tables 7 and 8.
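A minimal NumPy sketch of this unified idea, combining the batch-averaged top-$k$ with the zero-removal mentioned above (shapes and names are ours, not from the released PyTorch code):

  import numpy as np

  def unified_topk_grad(dL_dY, X, k):
      # dL_dY: (batch, n) gradient w.r.t. the layer output; X: (batch, m) input.
      # Rank output components by their average magnitude over the mini-batch,
      # so every example shares the same k selected components.
      avg_mag = np.abs(dL_dY).mean(axis=0)
      top_idx = np.argpartition(avg_mag, -k)[-k:]
      # Removing the zero columns turns the sparse (batch, n) gradient into a
      # dense (batch, k) matrix, so the weight gradient is one small matmul.
      dL_dY_small = dL_dY[:, top_idx]          # (batch, k)
      dL_dW_rows = dL_dY_small.T @ X           # (k, m): rows top_idx of dL/dW
      return top_idx, dL_dW_rows

  # Toy usage: batch of 32, n = 500 outputs, m = 100 inputs, keep k = 20.
  rng = np.random.default_rng(0)
  idx, dW_rows = unified_topk_grad(rng.standard_normal((32, 500)),
                                   rng.standard_normal((32, 100)), k=20)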

List of Tables

Critiques

The main idea behind meProp is to cut off the backpropagation through the (n-k) nodes, where n is the number of nodes in the current layer and k is the number of nodes contributing the most to the loss in that layer. Intuitively, meProp acts as a threshold with respect to k, or as an activation function on the backpropagated gradient: only gradient components that are large enough in magnitude are passed to the previous layer, as in equation (10), \[ \frac{\partial L}{\partial x} \leftarrow \frac{\partial y}{\partial x} \cdot top_{k}\left(\frac{\partial L}{\partial y}\right) \quad \quad \]

  1. The authors do not propose a method for selecting k, so it is left to the reader's discretion, presumably to be treated as a hyperparameter. If so, in a deeply layered architecture where the weights between each layer are randomly initialized at each run, the appropriate k might differ from layer to layer, since the features learned at each layer need not be the same as those of the previous layers. However, under the assumption that we only perform top-$k$ selection on the gradient vector associated with the top layer, we do not choose a $k$ for each subsequent layer through which we backpropagate. The concern that we may lose valuable feature selection in the hidden layers is a valid one; further study should be carried out to see whether this is in fact the case and, if not, whether we can directly sparsify the weight matrices of the hidden layers.
  2. If the sum of the losses caused by the (n-k) nodes in the current layer exceeds any of the losses incurred by the k kept nodes, then it would not be correct to drop the (n-k) nodes: we can view the aggregate of the (n-k) nodes as a single opaque node with a composite weight, which would incur an aggregated loss greater than that of any of the k nodes.

In essence, the idea of selecting the top k nodes to keep proves to be effective, as the authors show, but the lack of information on how to select k for each layer, given the current state of that layer, might lead to inconsistency in the results.

In addition, the authors did not include convolutional neural networks in their experiments. It would have been interesting to see whether similar results are observed on that architecture; in principle, the method presented in this paper should only update the kernels in the parts of an image that contribute the most to the loss.

In the experimental settings, all networks are trained with Adam or AdaGrad, which adapt the learning rate for each weight; it is an interesting question whether the choice of optimizer influences the accuracy. The authors did not include results with SGD (with momentum).

The authors do not discuss whether meProp causes any loss (significant or not) on tasks where the preservation of temporal information and contextual data is important. For example, in tasks such as question answering with RNNs, memory of details from earlier regions of a paragraph could be degraded because the weights outside the top-k set are not updated during backpropagation. Indeed, the lack of principled methods for sparsification is a major issue here, since tasks such as machine translation often involve data in which certain parts of an input are systematically much more predictive than others. There could be a trade-off between knowledge preservation and the choice of the hyperparameter k, which could be examined through further analysis such as correlation/covariance studies.

The approach can be thought of as a deterministic dropout that gives priority to the connections contributing the largest gradients during backpropagation. However, unlike dropout (which is random in nature), selecting the top k may permanently exclude some parts of the network from training, which is not discussed in the paper at all. The authors also did not test their approach on bigger datasets such as ImageNet, so it is possible that the dataset used (MNIST) is too simple for the given network architecture and that the meProp approach merely helped the model generalize better in that setting; relying on MNIST results in 2017-era research sheds little light on real-world AI problems. The idea itself is simple, essentially applying only the k strongest gradients during backpropagation, and should carry over to different architectures as well (LSTMs, RNNs). The paper demonstrates the advantage of the method empirically, but only on simple datasets, and it lacks results on more complex, real-world datasets.

References

  1. Tollenaere, Tom. "SuperSAB: fast adaptive back propagation with good scaling properties." Neural networks 3.5 (1990): 561-573.
  2. Riedmiller, Martin, and Heinrich Braun. "A direct adaptive method for faster backpropagation learning: The RPROP algorithm." IEEE International Conference on Neural Networks, 1993.
  3. Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of machine learning research 15.1 (2014): 1929-1958.
  4. Shazeer, Noam, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." arXiv preprint arXiv:1701.06538 (2017).
  5. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", Journal of Machine Learning Research 15 (2014) 1929-1958
  6. Speech and Language Processing. Daniel Jurafsky & James H. Martin. 2017. Draft of August 28, 2017.