'''''XGBoost: A Scalable Tree Boosting System'''''
 
== Presented by ==


Jiang, Cong

Song, Ziwei

Ye, Zhaoshan

Zhang, Wenling


= Introduction =
Extreme Gradient Boosting, also known as XGBoost, has been a dominant model in Kaggle competitions as well as other machine learning and data mining challenges. The main reason for its success is that this tree boosting system is highly scalable, which means it can be used to solve a variety of problems with significantly less time and fewer resources.

This paper is written by the creators of XGBoost, Tianqi Chen and Carlos Guestrin. It gives an overview of the algorithmic optimizations and systems-level techniques used to develop XGBoost, organized as follows:

1. Building on the gradient tree boosting algorithm, Chen and Guestrin modify the regularized objective function and introduce an end-to-end tree boosting system. They also apply shrinkage and feature subsampling to prevent overfitting.

2. The system supports existing split finding algorithms, including the exact greedy and approximate algorithms. For weighted data, the authors propose a distributed weighted quantile sketch algorithm with provable guarantees, while for sparse data they develop a novel sparsity-aware algorithm that is much faster than a naive implementation.

3. They show that storing the data in ''blocks'' effectively reduces sorting cost, and that the cache-aware block structure is highly effective for out-of-core tree learning.

4. XGBoost is evaluated in designed experiments on 4 datasets. The paper compares it against adjusted settings of itself as well as against other systems to test its performance.
= Tree Boosting =
=== Regularized Learning Objective ===
Suppose the given data set has <math>m</math> features and <math>n</math> examples. A tree ensemble model uses <math>K</math> additive functions to predict the output.
[[File:eq1.png|center|eq1.png]]
where <math>F</math> is the space of regression trees (CART).
The authors suggest the following regularized objective function, which improves upon the standard gradient boosting objective:
[[File:eq2.png|center|eq2.png]]
Unlike the decision trees we have seen in class, a regression tree contains a continuous score on each leaf. The following two figures illustrate how the decision rules in the trees classify an example into leaves, and how the final prediction is calculated by summing up the scores in the corresponding leaves.
[[File:Picture2.1.png|500px| thumb| center| Figure 0: Prediction Score of Tree Structure.]]


[[File:Picture2.2.png|500px|thumb|center|Figure 1: Tree Ensemble Model. The final prediction for a given example is the sum of predictions from each tree.  ]]
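To make the additive model concrete, below is a minimal Python sketch (illustrative only, not the paper's implementation; the <code>TreeNode</code> class and the toy trees are assumptions for this example) of how a tree ensemble produces a prediction by summing the leaf scores of its <math>K</math> regression trees.
<pre>
# Minimal sketch of tree-ensemble prediction: y_hat(x) = sum_k f_k(x).
# The TreeNode class and the example trees are illustrative, not the paper's code.

class TreeNode:
    def __init__(self, feature=None, threshold=None, left=None, right=None, score=0.0):
        self.feature = feature      # index of the splitting feature (None for a leaf)
        self.threshold = threshold  # split threshold
        self.left = left            # subtree for x[feature] < threshold
        self.right = right          # subtree for x[feature] >= threshold
        self.score = score          # leaf weight w (used only when the node is a leaf)

def tree_predict(node, x):
    """Follow the decision rules down to a leaf and return its score."""
    while node.feature is not None:
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.score

def ensemble_predict(trees, x):
    """Tree ensemble prediction: the sum of the K trees' leaf scores."""
    return sum(tree_predict(t, x) for t in trees)

# Toy example with two stumps (K = 2).
t1 = TreeNode(feature=0, threshold=15.0,
              left=TreeNode(score=2.0), right=TreeNode(score=-1.0))
t2 = TreeNode(feature=1, threshold=0.5,
              left=TreeNode(score=0.9), right=TreeNode(score=-0.9))
print(ensemble_predict([t1, t2], x=[10.0, 1.0]))  # 2.0 + (-0.9) = 1.1
</pre>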


=== Gradient Tree Boosting ===
The tree ensemble model (Equation 2) is trained in an additive manner: at each iteration, we add to the model the new <math>f</math> that most improves our objective.
[[File:eq2.2.png|center|eq2.2.png]]
After applying a second-order Taylor expansion to the above objective and removing the constant terms, we obtain
[[File:eq3.png|center|eq3.png]]
Define <math>I_{j} = \{i \mid q(x_{i}) = j\}</math>
as the instance set of leaf <math>j</math>. We can rewrite Equation (3) as
[[File:eq4.png|center|eq4.png]]
By computing the optimal leaf weights and the corresponding optimal objective value, we obtain the following equation, which can be used as a scoring function to measure the quality of a tree structure. A lower score means a better tree structure.
[[File:eq5.png|center|eq5.png]]
The following figure shows an example of how this score is calculated.
[[File:Picture2.3.png|500px| thumb|center|Figure 2: Structure Score Calculation. We only need to sum up the gradient and second order gradient statistics on each leaf, then apply the scoring formula to get the quality score.  ]]
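As an illustration of the scoring formula, here is a minimal sketch (assuming the first- and second-order gradient statistics <math>g_i, h_i</math> and the leaf assignment are already given; the function names are made up for this example) that computes the optimal leaf weights <math>w_j^* = -G_j/(H_j+\lambda)</math> and the structure score <math>-\tfrac{1}{2}\sum_j G_j^2/(H_j+\lambda) + \gamma T</math>.
<pre>
# Sketch of the structure score: for a fixed tree structure, sum g_i and h_i per leaf,
# take w_j* = -G_j / (H_j + lambda), and score the tree with
#   -1/2 * sum_j G_j^2 / (H_j + lambda) + gamma * T.
# The leaf assignment below is assumed to be given by the tree's decision rules.

def leaf_stats(leaf_assignment, g, h):
    """Accumulate G_j and H_j for each leaf j."""
    stats = {}
    for i, j in enumerate(leaf_assignment):
        G, H = stats.get(j, (0.0, 0.0))
        stats[j] = (G + g[i], H + h[i])
    return stats

def structure_score(leaf_assignment, g, h, lam=1.0, gamma=0.0):
    stats = leaf_stats(leaf_assignment, g, h)
    score = -0.5 * sum(G * G / (H + lam) for G, H in stats.values())
    return score + gamma * len(stats)

def optimal_weights(leaf_assignment, g, h, lam=1.0):
    return {j: -G / (H + lam) for j, (G, H) in leaf_stats(leaf_assignment, g, h).items()}

# Toy example: 5 instances mapped to 2 leaves.
g = [-0.4, 0.6, -1.0, 0.2, 0.8]
h = [1.0, 1.0, 1.0, 1.0, 1.0]
leaves = [0, 0, 0, 1, 1]
print(structure_score(leaves, g, h))   # lower is better
print(optimal_weights(leaves, g, h))
</pre>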


Normally it is impossible to enumerate all possible tree structures. Assume <math>I_{L}</math> and <math>I_{R}</math> are the instance sets of the left and right nodes after a split, and <math> I = I_{L} \cup I_{R}</math>. Then the loss reduction after the split is given by
[[File:eq6.png|center]]
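This loss reduction can be evaluated directly from the accumulated statistics of the two children. Below is a minimal sketch of the gain computation (the default values of <math>\lambda</math> and <math>\gamma</math> are arbitrary choices for this example).
<pre>
# Sketch of the loss reduction (gain) for a candidate split, following Eq. (6):
# gain = 1/2 * [ G_L^2/(H_L+lam) + G_R^2/(H_R+lam) - (G_L+G_R)^2/(H_L+H_R+lam) ] - gamma
# A split is only worth making when the gain is positive.

def split_gain(G_L, H_L, G_R, H_R, lam=1.0, gamma=0.0):
    def term(G, H):
        return G * G / (H + lam)
    return 0.5 * (term(G_L, H_L) + term(G_R, H_R) - term(G_L + G_R, H_L + H_R)) - gamma

# Toy example: gradient statistics of the two children after a candidate split.
print(split_gain(G_L=-3.0, H_L=4.0, G_R=5.0, H_R=6.0, lam=1.0, gamma=0.5))
</pre>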
=== Shrinkage and Column Subsampling ===
In addition to the regularized objective, the paper also introduces two techniques to prevent over-fitting.  


Method 1: Shrinkage. After each step of tree boosting, the newly added weights are scaled by a shrinkage factor η, which reduces the influence of each individual tree and leaves room for future trees to improve the model.


Method 2: Column (feature) subsampling. Each individual tree is built with only a randomly chosen subset of the predictors. This procedure encourages variance between the trees (i.e., no two identical trees are produced). As implemented in the open-source package, it prevents over-fitting even more effectively than traditional row subsampling, and it also speeds up the computation of the parallel split finding algorithm described later. A minimal sketch of both techniques follows.
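The following sketch shows where the two techniques enter a generic boosting loop; <code>fit_tree</code> and <code>g_h_fn</code> are hypothetical placeholders for one round of tree construction and for computing the gradient statistics, so this is an outline of the idea rather than the package's actual implementation.
<pre>
# Sketch of where shrinkage and column subsampling enter an additive boosting loop.
# fit_tree(X, g, h, allowed_features) and g_h_fn(predictions) are hypothetical
# placeholders for one round of tree construction and for the gradient statistics.
import random

def boost(X, g_h_fn, fit_tree, n_rounds, eta=0.1, colsample=0.5):
    n_features = len(X[0])
    trees, predictions = [], [0.0] * len(X)
    for _ in range(n_rounds):
        # Column subsampling: each tree only sees a random subset of the features,
        # which decorrelates the trees and speeds up split finding.
        cols = random.sample(range(n_features), max(1, int(colsample * n_features)))
        g, h = g_h_fn(predictions)
        tree = fit_tree(X, g, h, allowed_features=cols)
        trees.append(tree)
        for i, x in enumerate(X):
            # Shrinkage: scale the new tree's contribution by eta so that each
            # individual tree has limited influence and later trees can improve the model.
            predictions[i] += eta * tree.predict(x)
    return trees
</pre>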


= Split Finding Algorithms =
There are two main methods for finding the best split at each node.


Basic exact greedy algorithm: enumerate over all possible splits on all features, which is computationally demanding. To do this efficiently, the algorithm first sorts the data according to feature values and then visits the data in sorted order to accumulate the gradient statistics greedily.
[[File:exact_greedy.png|500px| thumb|center ]]
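Below is a minimal sketch of the exact greedy scan for a single feature (a simplified view of Algorithm 1 in the paper): sort the instances by feature value, sweep once while accumulating <math>G_L, H_L</math>, and keep the threshold with the largest gain from Eq. (6).
<pre>
# Sketch of exact greedy split finding for one feature (simplified):
# sort by feature value, sweep once while accumulating G_L and H_L,
# and keep the split with the largest gain according to Eq. (6).

def best_split_exact(values, g, h, lam=1.0, gamma=0.0):
    order = sorted(range(len(values)), key=lambda i: values[i])
    G, H = sum(g), sum(h)
    G_L = H_L = 0.0
    best_gain, best_threshold = 0.0, None
    for pos in range(len(order) - 1):
        i = order[pos]
        G_L += g[i]
        H_L += h[i]
        G_R, H_R = G - G_L, H - H_L
        # Only consider thresholds between two distinct feature values.
        if values[order[pos]] == values[order[pos + 1]]:
            continue
        gain = 0.5 * (G_L**2 / (H_L + lam) + G_R**2 / (H_R + lam)
                      - G**2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain = gain
            best_threshold = 0.5 * (values[order[pos]] + values[order[pos + 1]])
    return best_threshold, best_gain
</pre>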


Approximate algorithm: to make gradient tree boosting effective when the data cannot be handled by the exact greedy algorithm, the authors introduce an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, maps the continuous features into buckets defined by these candidates, and then finds the best split among the proposals using the aggregated statistics.
[[File:appro_greedy.png|500px| thumb|center]]
The algorithm has two variants, depending on when the proposal is given.


The global variant: all candidate splits are proposed during the initial phase of tree construction and are reused for all subsequent split finding. It requires fewer proposal steps.

The local variant: the candidate splits are re-proposed after each split. It requires fewer candidate points per proposal, so it is potentially more appropriate for deeper trees.

Either variant can be chosen according to the user's needs; a small sketch of the proposal step follows the figure below.
[[File:auc.png|500px| thumb|center]]
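The sketch below illustrates the proposal and bucketing steps in a simplified, unweighted form (the helper names are made up for this example); the weighted quantile sketch described next refines exactly this proposal step.
<pre>
# Sketch of the approximate algorithm: propose candidate split points from percentiles
# of a feature, map the values into the resulting buckets, and accumulate G/H per bucket
# so the best bucket boundary can be chosen from the aggregated statistics.
import bisect

def propose_candidates(values, n_buckets):
    """Unweighted percentile proposal: n_buckets - 1 interior candidate points.
    The global variant would call this once per feature before building the tree."""
    s = sorted(values)
    return [s[int(q * (len(s) - 1) / n_buckets)] for q in range(1, n_buckets)]

def bucket_statistics(values, g, h, candidates):
    """Aggregate gradient statistics per bucket defined by the candidate points."""
    G = [0.0] * (len(candidates) + 1)
    H = [0.0] * (len(candidates) + 1)
    for v, gi, hi in zip(values, g, h):
        b = bisect.bisect_left(candidates, v)
        G[b] += gi
        H[b] += hi
    return G, H  # the best split is then searched over bucket boundaries only
</pre>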
=== Weighted Quantile Sketch ===
Motivation: one key step in the approximate algorithm is proposing candidate split points. Usually, percentiles of a feature are used so that the candidates are distributed evenly over the data.


Let the multi-set
<math>
 D_k = \{ (x_{1k}, h_1), (x_{2k}, h_2), \ldots, (x_{nk}, h_n) \}
</math>
represent the k-th feature values and second-order gradient statistics of the training instances. Define a rank function <math> r_k : \mathbb{R} \rightarrow [0, \infty) </math>:
[[File:For4.png|500px| thumb|center ]]
which represents the proportion of instances, weighted by <math> h </math>, whose k-th feature value is smaller than <math> z </math>. The goal is to find candidate split points <math> \{s_{k1}, s_{k2}, \ldots, s_{kl}\} </math> such that
[[File:For5.png|500px| thumb|center ]]
Here <math> \epsilon </math> is an approximation factor, which means that there are roughly <math> 1/\epsilon </math> candidate points.
Each data point is weighted by <math> h_i </math>: rewriting Eq. (3) as
[[File:For3.png|500px| thumb|center ]]
we have
[[File:For6.png|500px| thumb|center ]]
which is exactly weighted squared loss with labels <math> g_i/h_i </math> and weights <math> h_i </math>.
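To make the rank function concrete, here is a minimal sketch of <math>r_k</math> and of a greedy, in-memory candidate proposal driven by it; the paper's contribution is a distributed quantile sketch that achieves the same goal with provable guarantees, which this toy version does not attempt.
<pre>
# Sketch of the weighted rank function r_k(z): the fraction of total second-order
# weight h carried by instances whose k-th feature value is below z.  Candidate
# split points are chosen so that consecutive candidates differ in rank by about epsilon.

def weighted_rank(feature_values, h, z):
    total = sum(h)
    below = sum(hi for v, hi in zip(feature_values, h) if v < z)
    return below / total

def propose_weighted_candidates(feature_values, h, eps):
    """Greedy, exact in-memory proposal; the paper's quantile sketch approximates
    this in a single distributed pass over the data."""
    pairs = sorted(zip(feature_values, h))
    total = sum(h)
    candidates, acc = [], 0.0
    for v, hi in pairs:
        acc += hi
        if acc / total >= eps * (len(candidates) + 1):
            candidates.append(v)
    return candidates  # roughly 1/eps candidate points
</pre>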


=== Sparsity-aware Split Finding ===


In real-world problems, it is quite common for the input <math> X </math> to be sparse. Common causes of sparsity are: 1. the presence of missing values in the data; 2. frequent zero entries in the statistics; and 3. artifacts of feature engineering such as one-hot encoding.


[[File:Figure 4.png|500px| thumb|center | Figure 4. Tree structure with default directions. An example is classified into the default direction when the feature needed for the split is missing.]]
The paper proposes adding a default direction to each tree node, as shown in Figure 4. When a value is missing in the sparse matrix <math> X </math>, the instance is classified into the default direction. There are two possible default directions at each branch, and the optimal default direction is learned from the data. Algorithm 3 shows the sparsity-aware split finding procedure.


[[File:Algorithm 3.png|500px| thumb|center ]]
The key improvement is to visit only the non-missing entries <math> I_k</math>. The presented algorithm treats non-presence as a missing value and learns the best direction in which to send missing values.
XGBoost handles all sparsity patterns in a unified way. More importantly, the paper exploits the sparsity to make the computational complexity linear in the number of non-missing entries in the input.
[[File:Figure 5.png|500px| thumb|center | Figure 5. Impact of the sparsity aware algorithm on Allstate-10K. The dataset is sparse mainly due to one-hot encoding. The sparsity aware algorithm is more than 50 times faster than the naive version that does not take sparsity into consideration. ]]
Figure 5 compares the sparsity-aware algorithm with a naive implementation on the Allstate-10K dataset. The results show that the sparsity-aware algorithm runs about 50 times faster than the naive version, which confirms its importance. A minimal sketch of the default-direction idea follows.
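The sketch below illustrates the default-direction idea for a single candidate split: statistics are accumulated over the non-missing entries only, and the missing mass is assigned to whichever side yields the larger gain (function and argument names are illustrative, not the paper's code).
<pre>
# Sketch of the sparsity-aware idea: gradient statistics are accumulated only over the
# non-missing entries I_k, and the missing instances are sent to whichever default
# direction (left or right) gives the larger gain.  G_total and H_total include the
# statistics of the missing instances.

def gain_with_default_direction(G_nonmiss_L, H_nonmiss_L, G_nonmiss_R, H_nonmiss_R,
                                G_total, H_total, lam=1.0, gamma=0.0):
    def gain(G_L, H_L):
        G_R, H_R = G_total - G_L, H_total - H_L
        return 0.5 * (G_L**2 / (H_L + lam) + G_R**2 / (H_R + lam)
                      - G_total**2 / (H_total + lam)) - gamma

    # Missing values go left: add their statistics to the left child.
    G_miss = G_total - (G_nonmiss_L + G_nonmiss_R)
    H_miss = H_total - (H_nonmiss_L + H_nonmiss_R)
    gain_left = gain(G_nonmiss_L + G_miss, H_nonmiss_L + H_miss)
    # Missing values go right: the left child keeps only the non-missing statistics.
    gain_right = gain(G_nonmiss_L, H_nonmiss_L)
    return ("left", gain_left) if gain_left >= gain_right else ("right", gain_right)
</pre>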


= System Design =


=== Column Blocks and Parallelization ===


[[File:Figure6.png|500px| thumb|center ]]


1. Feature values are sorted.


2. A block contains one or more feature values.


3. Instance indices are stored in blocks.


4. Missing features are not stored.


5. With column blocks, a parallel split finding algorithm is easy to design (a minimal sketch of the block layout follows below).
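Here is a minimal sketch of such a block layout (illustrative only, not the package's actual data structure): each feature column is stored once as value–index pairs, pre-sorted and with missing entries omitted, so split finding is a linear scan per column and different columns can be processed by different threads.
<pre>
# Sketch of a column block layout: for every feature, store (feature value, instance
# index) pairs once, pre-sorted by value, with missing entries omitted.  Split finding
# for each feature is then a linear scan over its column, and the columns can be
# scanned by different threads in parallel.
import math

def build_column_blocks(X):
    """X is a row-major matrix where math.nan marks a missing feature value."""
    n_rows, n_cols = len(X), len(X[0])
    blocks = []
    for k in range(n_cols):
        column = [(X[i][k], i) for i in range(n_rows) if not math.isnan(X[i][k])]
        column.sort()                  # sorted once, reused for every split search
        blocks.append(column)
    return blocks

# Toy example with a missing value in the second feature.
X = [[1.0, 7.0], [3.0, math.nan], [2.0, 5.0]]
print(build_column_blocks(X))
# [[(1.0, 0), (2.0, 2), (3.0, 1)], [(5.0, 2), (7.0, 0)]]
</pre>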


=== Cache Aware Access ===
A thread pre-fetches data from non-contiguous memory into a contiguous buffer.
The main thread then accumulates the gradient statistics in the contiguous buffer.


= Evaluations =
=== System Implementation ===
Implemented as an open-source package, XGBoost is flexible, portable, and reusable, and it supports a variety of languages and ecosystems.
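As a usage illustration (a minimal sketch, not taken from the paper; the toy data, parameter values, and number of boosting rounds are arbitrary), the open-source Python package can be driven roughly as follows. The <code>eta</code> and <code>colsample_bytree</code> parameters correspond to the shrinkage and column subsampling techniques discussed earlier.
<pre>
# Minimal usage sketch of the open-source Python package (toy data, arbitrary parameters).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X, label=y)      # missing values could be marked via missing=np.nan
params = {
    "objective": "binary:logistic",
    "eta": 0.1,                       # shrinkage factor
    "max_depth": 4,
    "colsample_bytree": 0.8,          # column subsampling
    "lambda": 1.0,                    # L2 regularization on leaf weights
}
booster = xgb.train(params, dtrain, num_boost_round=50)
print(booster.predict(dtrain)[:5])
</pre>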


=== Dataset and Setup ===
Table 2 summarizes the 4 datasets used in the experiments.
[[File:t2.png|500px| thumb|center ]]




The first three datasets are used in the single-machine parallel setting, run on a Dell PowerEdge R420 with two 8-core Intel Xeon CPUs and 64 GB of memory. The distributed and out-of-core experiments, which use the criteo dataset, are run on AWS c3.8xlarge machines. All available cores are used in the experiments unless otherwise specified.


=== Experiments ===
The classification experiment uses the Higgs boson dataset to classify whether an event corresponds to the Higgs boson. The goal is to evaluate XGBoost with the exact greedy method and compare it against XGBoost with column subsampling and two other well-known systems. Table 3 below summarizes the results.
[[File:t3.png|500px| thumb|center ]]
Overall, both XGBoost and scikit-learn achieve a test AUC roughly 25% higher than R.gbm's. In terms of running time, however, XGBoost is significantly faster than both scikit-learn and R.gbm. Note that XGBoost with column subsampling performs slightly worse because the number of important features in this dataset is limited. We can conclude that XGBoost with the exact greedy algorithm and without column subsampling works best in this scenario.


The learning-to-rank experiment compares XGBoost (using the exact greedy algorithm) with pGBRT (using the approximate algorithm) in order to evaluate the system's performance on a ranking problem.


[[File:t4.png|500px| thumb|center ]]
[[File:f10.png|500px| thumb|center ]]
According to Table 4, XGBoost with column subsampling performs almost as well as pGBRT while using only about one fifth of the running time. Note that pGBRT was the best previously published system for this ranking problem.


The out-of-core and distributed experiments demonstrate XGBoost's ability to handle large datasets quickly.


= Conclusion =
This paper proposes a scalable end-to-end tree boosting system with cache-aware access patterns, a block structure, and other essential elements. It also introduces two novel split finding techniques: a sparsity-aware algorithm and a weighted quantile sketch with provable guarantees. Compared to other popular systems, XGBoost achieves state-of-the-art results across different experimental settings, which explains why it is considered one of the best and fastest algorithms for structured (tabular) data.


= Source =
Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. Retrieved from http://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf. arXiv:1603.02754 [cs.LG].

Chen, T. (2014). Introduction to Boosted Trees. Retrieved from https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf.

DMLC. (2015–2016). Introduction to Boosted Trees. Retrieved from https://xgboost.readthedocs.io/en/latest/model.html.
