
XGBoost: A Scalable Tree Boosting System

Jiang, Cong

Song, Ziwei

Ye, Zhaoshan

Zhang, Wenling

Model

Theory

An Artificial Neural Network (ANN) with $N$ layers maps an input vector $\mathbf{z}_0 \in \mathbb{R}^{p_0}$ through the sequence of updates

$\mathbf{z}_n = \sigma_n\left(\mathbf{W}_{n}\mathbf{z}_{n-1} + \mathbf{b}_{n}\right), \quad 1 \leq n \leq N$

where $\mathbf{W}_{n} \in \mathbb{R}^{p_n \times p_{n-1}}$ is the weight matrix and $\mathbf{b}_{n} \in \mathbb{R}^{p_n}$ is the bias vector associated with the $n$th layer in the network.

The element-wise vector function $\sigma_n\left(\cdot\right)$ is sigmoid-like for each component in its domain, outputting a value that ranges in $[0,1]$. Typically, the functions $\left\{\sigma_n\left(\cdot\right)\right\}$ are the same for $n \lt N$, but the final output $\sigma_N(\cdot)$ depends on the network architecture—for instance, it may be a softmax function for multi-class classification. Thus the network is completely characterized by its weights and biases as the tuple $(\left\{\mathbf{W}_{n}\right\},\left\{\mathbf{b}_{n}\right\})$.
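The update rule above can be sketched numerically. This is a minimal illustration, not the paper's code: hidden layers use the sigmoid and the final layer uses a softmax, matching the multi-class example in the text; the dimensions are made up for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

def forward(z0, weights, biases):
    """Apply z_n = sigma_n(W_n z_{n-1} + b_n) for n = 1..N."""
    z = z0
    for n, (W, b) in enumerate(zip(weights, biases), start=1):
        a = W @ z + b
        z = softmax(a) if n == len(weights) else sigmoid(a)
    return z

# Sample dimensions p0 = 4, p1 = 3, p2 = 2 (an N = 2 network as in Fig 1).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)
z2 = forward(rng.normal(size=4), [W1, W2], [b1, b2])
```

Because the last layer is a softmax, the output `z2` is a length-$p_2$ probability vector summing to one.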

A sample network for $N = 2$ is depicted in Fig 1 below.

File:ann.png
Fig 1. Graph of an ANN with $N = 2$, where the vertices represent the vectors $\left\{\mathbf{z}_n\right\}_{n=0}^2$ in their respective layers. Edges denote computation, where the vector transformations have been overlaid to show sample dimensions of $\mathbf{W}_1$ and $\mathbf{W}_2$, such that they match the vectors $\mathbf{z}_0$ and $\mathbf{z}_1$.

A Recurrent Neural Network (RNN) is a generalization of an ANN for a sequence of inputs $\left\{\mathbf{z}_0^{[t]}\right\}$, $t \in \left\{1,\ldots,T\right\}$, such that there are recurrent connections between the intermediary vectors $\left\{\mathbf{z}_n\right\}$ for different so-called time steps. These connections represent conditioning on the previous vectors in the sequence: supposing the sequence were a vectorized representation of words, an input to the network could be $\left\{\mathbf{z}_0^{[1]},\mathbf{z}_0^{[2]},\mathbf{z}_0^{[3]}\right\} = \left\{\text{pass}, \text{the}, \text{sauce}\right\}$. In a language modelling problem for predictive text, the probability of obtaining $\mathbf{z}_0^{[3]}$ is strongly conditioned on the previous words in the sequence. As such, additional recurrence weight matrices are added to the update rule for $1 \leq n \leq N$ and $t \gt 1$ to produce the recurrent update rule

$\mathbf{z}_n^{[t]} = \sigma_n\left( \mathbf{b}_{n} + \mathbf{W}_{n}\mathbf{z}_{n-1}^{[t]} + \mathbf{R}_n\mathbf{z}_{n}^{[t-1]} \right)$

where $\mathbf{R}_n \in \mathbb{R}^{p_n \times p_n}$ is the recurrence matrix that relates the $n$th layer’s output for item $t$ to its previous output for item $t-1$. The network architecture for a single layer $n$ at step $t$ is pictured in Fig 2 below.

File:rnn.png
Fig 2. Schematic of an RNN layer $n$ at step $t$ with recurrence on the output of $\mathbf{z}_n^{[t-1]}$, with the dimensions of the matrices $\mathbf{R}_{n}$ and $\mathbf{W}_{n}$ pictured.
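Unrolling the recurrent update over a short sequence makes the role of $\mathbf{R}_n$ concrete. The following is a toy sketch of a single RNN layer, with invented dimensions; the initial state $\mathbf{z}_n^{[0]}$ is taken to be zero, an assumption not specified in the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_layer(inputs, W, R, b, z_init):
    """Apply z_n^[t] = sigma(b + W z_{n-1}^[t] + R z_n^[t-1]) over t = 1..T."""
    z_prev = z_init  # plays the role of z_n^[0]
    outputs = []
    for x in inputs:  # x is z_{n-1}^[t]
        z_prev = sigmoid(b + W @ x + R @ z_prev)
        outputs.append(z_prev)
    return outputs

rng = np.random.default_rng(1)
p_in, p_n, T = 4, 3, 5
W = rng.normal(size=(p_n, p_in))
R = rng.normal(size=(p_n, p_n))  # recurrence matrix R_n is square, p_n x p_n
b = rng.normal(size=p_n)
seq = [rng.normal(size=p_in) for _ in range(T)]
zs = rnn_layer(seq, W, R, b, np.zeros(p_n))
```

Note that the same $\mathbf{W}$, $\mathbf{R}$, and $\mathbf{b}$ are reused at every step $t$; only the state $\mathbf{z}_n^{[t-1]}$ changes.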

The deep RNN of Graves (2013)

The RNN update rule used by Sutskever et al. comes from a paper by Graves (2013). The connections between layers are denser in this case. The final layer is fully connected to every preceding layer except for the input $\mathbf{z}_0^{[t]}$, and follows the update rule

$\mathbf{z}_{N}^{[t]} = \sigma_N\left( \mathbf{b}_N + \displaystyle\sum_{n' = 1}^{N-1} \mathbf{W}_{N,n'}\mathbf{z}_{n'}^{[t]} \right)$

where $\mathbf{W}_{N,n'} \in \mathbb{R}^{p_N\times p_{n'}}$ denotes the weight matrix between layers $n'$ and $N$.

The layers 2 through $N-1$ have additional connections to $\mathbf{z}_0^{[t]}$ as

$\mathbf{z}_n^{[t]} = \sigma_n\left( \mathbf{b}_{n} + \mathbf{W}_{n}\mathbf{z}_{n-1}^{[t]} + \mathbf{W}_{n,0}\mathbf{z}_0^{[t]} + \mathbf{R}_n\mathbf{z}_{n}^{[t-1]} \right),$

where, again, $\mathbf{W}_{n,n'}$ must lie in $\mathbb{R}^{p_n\times p_{n'}}$. The first layer has the typical RNN input rule as before,

$\mathbf{z}_{1}^{[t]} = \sigma_1\left( \mathbf{b}_{1} + \mathbf{W}_{1}\mathbf{z}_{0}^{[t]} + \mathbf{R}_{1}\mathbf{z}_{1}^{[t-1]} \right).$
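The three update rules above can be combined into one time step of the deep RNN. The sketch below assumes $N = 3$ (one of each layer type) with invented dimensions; note that, per the equation for $\mathbf{z}_N^{[t]}$, the final layer has no recurrence term of its own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graves_step(z0, W, R, b, z_prev):
    """One time step of the Graves-style deep RNN for N = 3.

    W[(n, m)] holds the weight matrix from layer m to layer n;
    z_prev[n] holds z_n^[t-1] for the recurrent layers.
    """
    z = {0: z0}
    # Layer 1: the typical RNN input rule.
    z[1] = sigmoid(b[1] + W[(1, 0)] @ z0 + R[1] @ z_prev[1])
    # Layers 2..N-1: extra skip connection W_{n,0} z_0^[t] from the input.
    z[2] = sigmoid(b[2] + W[(2, 1)] @ z[1] + W[(2, 0)] @ z0 + R[2] @ z_prev[2])
    # Final layer N: fully connected to every hidden layer, no recurrence.
    z[3] = sigmoid(b[3] + W[(3, 1)] @ z[1] + W[(3, 2)] @ z[2])
    return z

rng = np.random.default_rng(2)
p = {0: 4, 1: 3, 2: 3, 3: 2}  # sample layer widths p_0..p_3
W = {(1, 0): rng.normal(size=(p[1], p[0])),
     (2, 1): rng.normal(size=(p[2], p[1])),
     (2, 0): rng.normal(size=(p[2], p[0])),
     (3, 1): rng.normal(size=(p[3], p[1])),
     (3, 2): rng.normal(size=(p[3], p[2]))}
R = {1: rng.normal(size=(p[1], p[1])), 2: rng.normal(size=(p[2], p[2]))}
b = {n: rng.normal(size=p[n]) for n in (1, 2, 3)}
z_prev = {1: np.zeros(p[1]), 2: np.zeros(p[2])}
out = graves_step(rng.normal(size=p[0]), W, R, b, z_prev)
```

Running a full sequence amounts to feeding each `out` back in as `z_prev` for the next step.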

Training objective

Recurrent neural networks are a variation of deep neural networks that are capable of storing information about previous hidden states in special memory layers.<ref name=lstm> Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780. </ref> Unlike feed-forward neural networks, which take in a single fixed-length vector input and output a fixed-length vector output, recurrent neural networks can take in a sequence of fixed-length vectors as input, because of their ability to store information and maintain a connection between inputs through this memory layer. By comparison, previous inputs have no impact on the current output of a feed-forward neural network, whereas they can impact the current output of a recurrent neural network. (This paper used the LSTM formulation from Graves<ref name=grave> Graves, Alex. "Generating sequences with recurrent neural networks." arXiv preprint arXiv:1308.0850 (2013). </ref>)

The training objective is

$\frac{1}{|T_r|} \displaystyle\sum_{(T,S) \in T_r} \log p(T|S)$

where $\,S$ is the base/source sentence, $\,T$ is the paired translated sentence, and $\,T_r$ is the total training set. This objective maximizes the log probability of a correct translation $\,T$ given the base/source sentence $\,S$ over the entire training set. Once training is complete, translations are produced by finding the most likely translation according to the LSTM:

$\hat{T} = \underset{T}{\operatorname{arg\ max}}\ p(T|S)$
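The arg max above can be illustrated with a toy stand-in for the trained model. In the paper this search is performed with a beam-search decoder over the LSTM's output distribution; here, a hypothetical fixed table of log-probabilities plays the role of $\log p(T|S)$, and we simply score a small candidate set.

```python
def best_translation(source, candidates, log_prob):
    """Pick T-hat = argmax_T p(T|S), scoring candidates with log p(T|S).

    `log_prob(T, S)` stands in for the trained LSTM's scoring function.
    """
    return max(candidates, key=lambda T: log_prob(T, source))

# Hypothetical candidate translations with made-up log-probabilities.
table = {"das haus": -0.4, "ein haus": -1.2, "die maus": -2.3}
t_hat = best_translation("the house", list(table), lambda T, S: table[T])
# t_hat is the candidate with the highest log-probability, "das haus"
```

Because $\log$ is monotone, maximizing $\log p(T|S)$ and maximizing $p(T|S)$ select the same $\hat{T}$.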

It has been shown that Long Short-Term Memory recurrent neural networks have the ability to generate both discrete and real-valued sequences with complex, long-range structure using next-step prediction.<ref name=grave />

Training and Results

Results

The resulting LSTM neural networks outperformed standard Statistical Machine Translation (SMT) with a BLEU score of 34.8 against 33.3, and with certain heuristics or modifications came very close to matching the best performing system. Additionally, it could recognize sentences in both active and passive voice as being similar.

Active Voice: I ate an apple.

Passive Voice: The apple was eaten by me.

In summary, the LSTM method has proven to be quite capable of translating long sentences despite the potentially long delay between input time steps. However, it still falls short of [http://www.statmt.org/OSMOSES/sysdesc.pdf Edinburgh's specialised statistical model].

Open questions

1. Instead of reversing the input sequence, the target sequence could be reversed. This would change the time lags between corresponding words in a similar way, but instead of reducing the time lag between the first half of corresponding words, it would be reduced between the last half of the words. This might allow conclusions about whether the improved performance is purely due to the reduced minimal time lag or whether structure in natural language is also important (e.g. whether a short time lag between the first few words is better than a short time lag between the last few words of a sentence).
2. For half of the words, the time lag increases to more than the average, so those words might contribute only marginally to model performance. It could be interesting to see how much performance is affected by leaving those words out of the input sequence. More generally, one could ask: how does performance relate to the number of input words used?