XGBoost: A Scalable Tree Boosting System (statwiki, last edited 2018-11-28)
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
Machine learning and data-driven methods are widely used in modern applications. Tree boosting is considered one of the best-performing machine learning methods, delivering state-of-the-art results on a wide range of problems. This page introduces XGBoost, a scalable end-to-end tree boosting system. We describe the exact greedy algorithm and the approximate algorithm, and then two important components of the approximate algorithm: a novel sparsity-aware algorithm and the weighted quantile sketch. Finally, we discuss why XGBoost has become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression decision tree (also known as classification and regression tree, CART):<br />
* Decision rules are the same as in an ordinary decision tree<br />
* Each leaf contains a score<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in \mathcal{F}</math><br />
<br />
Objective: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is the training loss and <math>\sum_{k=1}^K \omega(f_k)</math> is the complexity of the trees<br />
<br />
So the objective function to optimize is <math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in \mathcal{F}</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y_{i}^{(t)} = \sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{k=1}\omega(f_k)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+\omega(f_t)+\mathrm{constant}</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)]+\omega(f_t)+\mathrm{constant}</math><br />
<br />
where, for squared-error loss, <math>g_i =\partial_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)</math> and <math>h_i = \partial^2_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2</math><br />
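As a quick check of these formulas, here is a minimal Python sketch (illustrative only; the function name is made up) that computes <math>g_i</math> and <math>h_i</math> for the squared-error loss used above:<br />

```python
# Gradient statistics for squared-error loss l(y, yhat) = (yhat - y)^2,
# evaluated at the previous round's predictions.

def grad_stats(y, y_pred_prev):
    """Return (g_i, h_i) lists for squared loss at the previous prediction."""
    g = [2.0 * (p - t) for t, p in zip(y, y_pred_prev)]  # first derivative
    h = [2.0 for _ in y]                                  # second derivative is constant
    return g, h

g, h = grad_stats([1.0, 0.0], [0.5, 0.25])
# g = [-1.0, 0.5], h = [2.0, 2.0]
```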
<br />
Define <math>I_j=\{i|q(x_i)=j\}</math> as the instance set of leaf j, with <math>f_t(x_i)=w_j</math> for <math>i \in I_j</math>. We can rewrite the objective function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of leaf j is <math>w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\gamma</math><br />
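The optimal-weight and split-gain formulas above can be sketched in a few lines of Python (an illustration with arbitrarily chosen inputs, not XGBoost's implementation):<br />

```python
# Optimal leaf weight and gain of one candidate split, given the summed
# gradient statistics G = sum of g_i and H = sum of h_i on each side.

def leaf_weight(G, H, lam):
    return -G / (H + lam)

def split_gain(G_L, H_L, G_R, H_R, lam, gamma):
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(G_L, H_L) + score(G_R, H_R)
                  - score(G_L + G_R, H_L + H_R)) - gamma

w = leaf_weight(-2.0, 4.0, lam=1.0)        # -(-2)/(4+1) = 0.4
gain = split_gain(-2.0, 4.0, 3.0, 6.0, lam=1.0, gamma=0.0)
```

A split is kept only when its gain is positive: the regularizer <math>\gamma</math> acts as the minimum loss reduction required to add a leaf.<br />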
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
The exact greedy algorithm is a split finding algorithm that enumerates all possible splits on all features. However, it becomes inefficient or infeasible when the data does not fit entirely into memory.<br />
<br />
The algorithm is as follows:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limited computational memory and efficiency concerns, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, then maps the continuous features into buckets split by these candidate points, aggregates the statistics, and finds the best solution among the proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
The figure above shows that the quantile strategy can achieve the same accuracy as the exact greedy algorithm given a reasonable approximation level.<br />
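The candidate-proposal step can be illustrated with a small Python sketch (helper names are made up; the real system uses the weighted quantile sketch described next rather than plain percentiles):<br />

```python
# Propose candidate split points at evenly spaced percentiles of a feature,
# then map each value to the bucket it falls in.
import bisect

def propose_candidates(values, num_candidates):
    s = sorted(values)
    n = len(s)
    # take values at evenly spaced ranks as candidate split points
    return [s[(i * (n - 1)) // (num_candidates - 1)] for i in range(num_candidates)]

def bucketize(value, candidates):
    return bisect.bisect_left(candidates, value)

feature = [0.3, 1.2, 0.7, 2.5, 1.9, 0.1, 3.3, 2.2]
cands = propose_candidates(feature, 4)     # [0.1, 0.7, 1.9, 3.3]
buckets = [bucketize(v, cands) for v in feature]
```

Gradient statistics are then aggregated per bucket, so split finding only has to scan the (few) candidate points instead of every distinct value.<br />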
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phases of the approximate algorithm. The most common approach is to split by a feature's percentiles in order to obtain a uniform distribution of the selected data.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k=\{(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)\}</math><br />
<br />
where the x's are the k-th feature values of the data points, and the h's are the weights (the second-order gradient statistics) of the corresponding x's. <br />
<br />
We can use the following function to rank:<br />
<br />
<math>r_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The output of this function is the weighted proportion of data points whose k-th feature value is less than z. The objective is to find split points <br />
<br />
<math>\{s_{k1}, s_{k2}, \ldots, s_{kl}\}</math><br />
<br />
such that<br />
<br />
<math>|r_k(s_{k,j}) - r_k(s_{k,j+1})| < \epsilon, \quad s_{k1} = \min_i x_{ik}, \quad s_{kl} = \max_i x_{ik}</math><br />
<br />
where <math>\epsilon</math> is an approximation factor. This yields approximately <math>\frac{1}{\epsilon}</math> splitting points.<br />
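The rank function <math>r_k(z)</math> is easy to state in code. Below is a plain-Python sketch (names invented for illustration; XGBoost uses a mergeable quantile-sketch data structure rather than this exhaustive sum):<br />

```python
# r(z): the h-weighted fraction of points whose feature value is below z.
# In XGBoost the weights h are the second-order gradient statistics,
# which is why a *weighted* quantile sketch is needed.

def rank(D, z):
    """D is a list of (feature_value, weight) pairs; return r(z)."""
    total = sum(h for _, h in D)
    below = sum(h for x, h in D if x < z)
    return below / total

D = [(0.5, 2.0), (1.5, 1.0), (2.5, 1.0)]
r = rank(D, 2.0)   # (2.0 + 1.0) / 4.0 = 0.75
```

Split candidates are then picked so that consecutive candidates differ by less than <math>\epsilon</math> in this weighted rank.<br />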
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g. one-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes learning a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a feature value is missing, the following algorithm determines the optimal default direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values can be treated as missing and neglected when calculating the score.<br />
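The core idea can be sketched as follows (a simplified illustration with invented names and arbitrary inputs, not Algorithm 3 itself): try sending all missing-valued rows left, then right, and keep whichever default direction gives the higher gain.<br />

```python
# Gains are computed from summed gradient statistics as in Section 2;
# lam and gamma are the regularization parameters.

def score(G, H, lam):
    return G * G / (H + lam)

def gain(G_L, H_L, G_R, H_R, lam, gamma):
    return 0.5 * (score(G_L, H_L, lam) + score(G_R, H_R, lam)
                  - score(G_L + G_R, H_L + H_R, lam)) - gamma

def best_default_direction(GL, HL, GR, HR, G_miss, H_miss, lam=1.0, gamma=0.0):
    # G_miss, H_miss: summed statistics of the rows with a missing value
    gain_left = gain(GL + G_miss, HL + H_miss, GR, HR, lam, gamma)
    gain_right = gain(GL, HL, GR + G_miss, HR + H_miss, lam, gamma)
    return ("left", gain_left) if gain_left >= gain_right else ("right", gain_right)

direction, best_gain = best_default_direction(-4.0, 5.0, 3.0, 6.0,
                                              G_miss=-1.0, H_miss=2.0)
```

Because only the rows with present values need to be scanned per feature, the cost is linear in the number of non-missing entries, which is what makes the algorithm sparsity-aware.<br />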
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for split finding.<br />
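The reason one linear scan suffices is that left-side gradient sums can be accumulated in sorted order, evaluating the gain at every boundary along the way. A toy Python sketch (illustrative, not XGBoost's code):<br />

```python
# Exact split finding on one pre-sorted feature column in a single pass.

def best_split(xs, gs, hs, lam=1.0, gamma=0.0):
    """xs, gs, hs: parallel lists of feature values and gradient stats."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])  # the block keeps this order
    G, H = sum(gs), sum(hs)
    def score(g, h):
        return g * g / (h + lam)
    GL = HL = 0.0
    best = (float("-inf"), None)
    for i in order[:-1]:                   # last position leaves the right side empty
        GL += gs[i]
        HL += hs[i]
        g = 0.5 * (score(GL, HL) + score(G - GL, H - HL) - score(G, H)) - gamma
        if g > best[0]:
            best = (g, xs[i])              # split threshold just after xs[i]
    return best

gain, threshold = best_split([1.0, 3.0, 2.0], [-2.0, 3.0, 1.0], [2.0, 2.0, 2.0])
```

In the block structure this sort is done once per column before training, so every later split search is the linear scan above rather than a fresh sort.<br />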
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0+ \|x\|_0 \log n)</math><br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows of the dataset. The blocks are also kept in sorted order, so for the quantile finding step, a linear scan over the sorted columns is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0+ \|x\|_0 \log B)</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operations.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
* An overly small block size results in small per-thread workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. Ideally, computation should run concurrently with disk reading to reduce the overhead. Two major techniques are used to improve out-of-core computation:<br />
<br />
1) Block Compression<br />
<br />
* Feature values in each block are compressed<br />
<br />
* Feature values are decompressed on the fly by an independent thread when read back into memory<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
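The sharding scheme above can be sketched as a producer/consumer pattern (a toy simulation with lists standing in for disks; in XGBoost the point is overlapping real disk I/O with training):<br />

```python
# One prefetcher thread per "disk" fills an in-memory buffer; the training
# thread alternately drains the buffers.
import queue
import threading

def prefetcher(disk_blocks, buffer):
    for block in disk_blocks:
        buffer.put(block)       # simulated disk read into the in-memory buffer
    buffer.put(None)            # sentinel: this disk is exhausted

disks = [["a1", "a2"], ["b1", "b2"]]
buffers = [queue.Queue(maxsize=2) for _ in disks]
threads = [threading.Thread(target=prefetcher, args=(d, b))
           for d, b in zip(disks, buffers)]
for t in threads:
    t.start()

consumed, live = [], list(buffers)
while live:                     # training thread alternates across buffers
    for b in list(live):
        block = b.get()
        if block is None:
            live.remove(b)
        else:
            consumed.append(block)
for t in threads:
    t.join()
# consumed alternates between the two disks: a1, b1, a2, b2
```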
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and ranking objective functions, as well as user-defined objectives, but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (e.g. Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, ranking documents by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last dataset is used for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison. <br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM is fast because it greedily expands only one branch of a tree, but this results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of 3, and sharding onto 2 disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks) and dataset storage on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even bigger data, given that it managed to handle 1.7 billion instances with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, is used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning; for greater efficiency, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate setting. Further, we gain insight into the XGBoost system's column blocks and cache-aware access patterns, which explain why XGBoost scales better and is more widely used than other systems.<br />
<br />
</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
where x's are the feature values of each data point, and h's are the weights of the corresponding x's. <br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The output of this function provides a general idea on what is the proportion of data who has k-th feature value less than the k-th feature value of data point z. The objective is to search for split points <br />
<br />
<math>{s_{k1}, s_{k2}, …, s_{kl}}</math><br />
<br />
such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon, s_{k1} = min (x_{ik}), s_{kl} = max (x_{ik}), \forall i</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs <math>O(Kd‖x‖_0 {log_n} )</math><br />
<br />
Tree boosting on block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_n )</math><br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs <math>O(Kd‖x‖_0 log_q )</math><br />
<br />
Approximate algorithm with block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_B )</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subset of crate data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion Criteo dataset. Using more machines provides more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, is used effectively; it achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate setting. Further, we gain insight into the XGBoost system through its column block structure and cache-aware access patterns, which explain why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
== Reference ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] R. Johnson and T. Zhang. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
where x's are the feature values of each data point, and h's are the weights of the corresponding x's. <br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The output of this function provides a general idea on what is the proportion of data who has k-th feature value less than the k-th feature value of data point z. The objective is to search for split points <br />
<br />
<math>{s_{k1}, s_{k2}, …, s_{kl}}</math><br />
<br />
such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon, s_{k1} = min (x_{ik}), s_{kl} = max (x_{ik})</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs <math>O(Kd‖x‖_0 {log_n} )</math><br />
<br />
Tree boosting on block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_n )</math><br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs <math>O(Kd‖x‖_0 log_q )</math><br />
<br />
Approximate algorithm with block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_B )</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. Missing data points are due to running out of disk space.]]<br />
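The two out-of-core optimizations compose multiplicatively, so a rough back-of-envelope calculation gives an overall speedup of about 6x. The baseline time below is a made-up number; only the ratios come from the text.<br />

```python
# Illustrative combination of the reported speedups; the 60-minute
# baseline is hypothetical, only the 3x and 2x ratios come from the text.
base_time = 60.0                         # hypothetical minutes per pass, unoptimized
after_compression = base_time / 3        # block compression: ~3x speedup
after_sharding = after_compression / 2   # sharding across 2 disks: further ~2x
overall_speedup = base_time / after_sharding
print(overall_speedup)  # 6.0
```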
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2, with m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each) and the dataset stored on AWS S3. The baselines, Spark MLlib and H2O, are in-memory analytics frameworks that need to keep the data in RAM, whereas XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems and, by taking advantage of out-of-core computation, smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has great potential to handle even larger data, since it processed the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes, for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, is used effectively; it achieves state-of-the-art results on a variety of challenging experiments. The exact greedy algorithm finds the best split in tree learning, but greater efficiency requires an approximate algorithm, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate setting. Further, the paper gives insight into the XGBoost system's column blocks and cache-aware access patterns, and explains why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
where x's are the feature values of each data point, and h's are the weights of the corresponding x's. <br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The output of this function provides a general idea on what is the proportion of data who has k-th feature value less than the k-th feature value of data point z. The objective is to search for split points <br />
<br />
<math>{s_{k1}, s_{k2}, …, s_{kl}}</math><br />
<br />
such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon, s_k1 = min x_ik, s_kl = max x_ik</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs <math>O(Kd‖x‖_0 {log_n} )</math><br />
<br />
Tree boosting on block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_n )</math><br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs <math>O(Kd‖x‖_0 log_q )</math><br />
<br />
Approximate algorithm with block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_B )</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison against other systems. <br />
The first evaluation compares XGBoost, running the exact greedy algorithm, against scikit-learn and R's GBM on the Higgs-1M data; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning-to-rank problem by comparing it against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher accuracy, empirically because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, using one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of 3, and sharding onto two disks gives a further 2x speedup. With these optimizations, the transition point where the system runs out of file cache is much less dramatic, thanks to larger effective disk throughput and better utilization of computation resources. <br />
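If the two optimizations composed independently (an idealization; the real gain depends on where the bottleneck sits), the combined out-of-core speedup would be about:<br />

```python
# Idealized composition of the reported out-of-core speedups.
compression_speedup = 3.0  # block compression: ~3x
sharding_speedup = 2.0     # sharding across 2 disks: a further ~2x
combined_speedup = compression_speedup * sharding_speedup  # ~6x in the ideal case
```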
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2, using m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each) with the dataset stored on AWS S3. The baselines, Spark MLLib and H2O, are in-memory analytics frameworks that need to store the data in RAM, whereas XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems and, by taking advantage of out-of-core computation, scales smoothly to all 1.7 billion instances, while the baseline systems are only able to handle subsets of the data. XGBoost's performance scales linearly as more machines are added, and since it handled 1.7 billion instances with only four machines, it has the potential to handle even larger data.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively; it achieves state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning, but a more efficient approximate algorithm is needed at scale, so sparsity-aware split finding and the weighted quantile sketch are introduced for it. Further, we gain insight into the XGBoost system design, including the column block structure and cache-aware access patterns, and explain why XGBoost scales better and is more widely used than other systems.<br />
<br />
== References ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs <math>O(Kd‖x‖_0 {log_n} )</math><br />
<br />
Tree boosting on block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_n )</math><br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs <math>O(Kd‖x‖_0 log_q )</math><br />
<br />
Approximate algorithm with block structure costs <math>O(Kd‖x‖_0+ ‖x‖_0 log_B )</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, is used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset produced from physics simulation events, is a classification task: whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last, the Criteo terabyte click log dataset, is pre-processed for tree-based models and evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last is used for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
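For concreteness, the common tree settings above map onto XGBoost-style hyperparameters roughly as follows (a hypothetical configuration sketch; the parameter names follow XGBoost's Python interface, and the number of boosting rounds is an assumption, since the paper varies it per experiment):<br />

```python
# Hypothetical configuration mirroring the common settings stated above.
# Parameter names follow XGBoost's Python interface; this is illustrative,
# not the authors' exact experiment scripts.
common_params = {
    "max_depth": 8,           # maximum tree depth
    "eta": 0.1,               # shrinkage (learning rate)
    "colsample_bytree": 1.0,  # no column subsampling
}
num_boost_round = 500         # assumed; the paper varies this per experiment
```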
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison against other systems.<br />
The first evaluation compares XGBoost (running the exact greedy algorithm), scikit-learn, and R&#8217;s GBM on the Higgs-1M data, and shows that XGBoost runs more than 10x faster than scikit-learn. R&#8217;s GBM greedily expands only one branch of a tree, which makes it fast but yields lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because there are few important features in this dataset.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning-to-rank problem by comparing it against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time, but also gives slightly higher performance, likely because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
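The column subsampling discussed above is simple to sketch (hypothetical Python, not XGBoost's implementation): before growing each tree, draw a random subset of the feature columns and restrict split finding to it.<br />

```python
import random

# Hypothetical sketch of column subsampling: each tree is only allowed
# to split on a random fraction of the feature columns, which both cuts
# per-split work and acts as a regularizer.

def subsample_columns(num_features, colsample_bytree, rng=None):
    """Return the sorted feature indices one tree may split on."""
    rng = rng or random.Random(0)   # fixed seed here for reproducibility
    k = max(1, int(num_features * colsample_bytree))
    return sorted(rng.sample(range(num_features), k))
```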
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of about three, and sharding onto two disks gives a further 2x speedup. Interestingly, the transition point when the system runs out of file cache is less dramatic than expected, thanks to the larger disk throughput and better utilization of computation resources that these two techniques provide. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks), with the dataset stored on AWS S3. Compared against Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and by taking advantage of out-of-core computation it smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost&#8217;s performance scales linearly as machines are added, and it has the potential to handle even larger data, given that it managed 1.7 billion examples with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively, achieving state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning; to be more efficient, an approximate algorithm is needed, for which the sparsity-aware split finding algorithm and the weighted quantile sketch are introduced. Further, the column block structure and cache-aware access patterns give insight into the XGBoost system, and explain why XGBoost scales better and is more widely used than other systems for learning on large data.<br />
<br />
</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs <math>O(Kd‖x‖_0 {log_n} )</math><br />
<br />
Tree boosting on block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_n )$<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs $O(Kd‖x‖_0 log_q )$ <br />
<br />
Approximate algorithm with block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_B )$<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subset of crate data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGboost with different number of machines on criteo full 1.7 billion dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is discussing how a scalable end-to-end tree boosting system, which is XGBoost, effective used. It helps us to achieve state-of-the-art results on variety experiment challenges. We use the exact greedy algorithm in order to find the best split in tree learning. Be more effective, an approximate algorithm is needed. Therefore we introduce sparsity-aware algorithm and weighted quantile sketch for approximate algorithm. Further, we gain an insight into XGBoost system on column block, cache-aware access patterns, and explain why XGBoost scales is better and wider use than other systems in the existing data statistic.<br />
<br />
== Reference ==<br />
<br />
[1] R,Bekkerman, M. Bilenko, and J.Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gym package.<br />
<br />
[3] C.Burges. From ranknet to lambdarank to lambdamart: An overview. Learning, 11:23-581,2010.<br />
<br />
[4] J.Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232,2001<br />
<br />
[5] T.Zhang and R.Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=41034XGBoost: A Scalable Tree Boosting System2018-11-22T23:55:07Z<p>Q39zhao: /* 4.1 Column Block for Parallel Learning */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs $O(Kd‖x‖_0 {log_n} )$<br />
<br />
Tree boosting on block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_n )$<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs $O(Kd‖x‖_0 log_q )$ <br />
<br />
Approximate algorithm with block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_B )$<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block by columns<br />
<br />
* Decompress feature values on the fly by an independent thread when loading into main memory<br />
<br />
* Compression ratio: 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* Used when multiple disks are available<br />
<br />
* Shard the dataset onto the disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
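The sharding scheme above can be sketched with threads and bounded queues standing in for disks and in-memory buffers. This is an illustrative Python sketch, not the actual XGBoost implementation:

```python
import threading
import queue

def shard(rows, n_disks):
    """Round-robin sharding of rows onto n_disks 'disks'."""
    return [rows[d::n_disks] for d in range(n_disks)]

def prefetcher(disk_rows, buffer):
    """Per-disk thread: push rows into the in-memory buffer."""
    for row in disk_rows:
        buffer.put(row)
    buffer.put(None)                          # end-of-disk marker

def train(rows, n_disks=2):
    disks = shard(rows, n_disks)
    buffers = [queue.Queue(maxsize=2) for _ in disks]
    threads = [threading.Thread(target=prefetcher, args=(d, b))
               for d, b in zip(disks, buffers)]
    for t in threads:
        t.start()
    seen, live = [], set(range(n_disks))
    while live:
        for d in list(live):                  # alternate across the buffers
            row = buffers[d].get()
            if row is None:
                live.discard(d)
            else:
                seen.append(row)
    for t in threads:
        t.join()
    return seen

rows = list(range(10))
assert sorted(train(rows)) == rows            # every row is read exactly once
```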
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
XGBoost is implemented as a portable and reusable open-source package. It supports weighted classification, ranking, and user-defined objective functions, as well as popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, used for predicting the likelihood of an insurance claim, evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset produced from physics simulation events, is a classification task: whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, in which documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed into features for a tree-based model; it evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
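For reference, the common setting above corresponds to an XGBoost-style parameter dictionary such as the following sketch (the parameter names mirror XGBoost's conventions; only the three values come from the text):

```python
# Hypothetical sketch of the shared boosting configuration used in the evaluations.
common_params = {
    "max_depth": 8,           # maximum tree depth
    "eta": 0.1,               # shrinkage (learning rate)
    "colsample_bytree": 1.0,  # no column subsampling
}
assert common_params["max_depth"] == 8
```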
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison against existing systems. <br />
The first evaluation compares XGBoost against R's GBM and scikit-learn on the Higgs-1M data, with XGBoost and scikit-learn both running the exact greedy algorithm; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree quickly but yields lower accuracy, while both scikit-learn and XGBoost learn full trees. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
The second group evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher accuracy, likely because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
The third group evaluates the XGBoost system in the out-of-core setting on the Criteo data, using one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, thanks to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
The fourth group evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each), with the dataset stored on AWS S3. Compared against Spark MLLib and H2O, in-memory analytics frameworks that need to keep the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the same limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computation, and smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle subsets of the data. XGBoost's performance scales linearly with the number of machines, and it has the potential to handle even larger data, as it processed the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines leaves more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper describes XGBoost, a scalable end-to-end tree boosting system, and how it is used effectively to achieve state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning; for better efficiency, an approximate algorithm is used, supported by the sparsity-aware algorithm and the weighted quantile sketch. Finally, insight into the XGBoost system design, including the column block and cache-aware access patterns, explains why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
== Reference ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
Machine learning and data-driven methods have become essential and widely used in modern applications. Tree boosting is considered one of the best machine learning methods, providing state-of-the-art results for a wide range of problems. This page introduces XGBoost, a scalable end-to-end tree boosting system. We describe the exact greedy algorithm and the approximate algorithm, along with two important components of the approximate algorithm: a novel sparsity-aware algorithm and the weighted quantile sketch. We also explore the reasons why XGBoost has become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression decision tree (also known as classification and regression tree):<br />
* Decision rules are the same as in a decision tree<br />
* Each leaf contains one score<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), \quad f_k \in \mathcal{F}</math><br />
<br />
Objective: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is the training loss and <math>\sum_{k=1}^K \omega(f_k)</math> is the complexity of the trees<br />
<br />
So the objective to be optimized is <math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in \mathcal{F}</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math> penalizes the number of leaves <math>T</math> and the leaf weights <math>w</math><br />
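As a small numeric illustration, the regularized objective can be evaluated directly, assuming the squared loss <math>l(y,\hat y)=(y-\hat y)^2</math> and a single tree (a hypothetical sketch, not library code):

```python
def omega(w, gamma=1.0, lam=1.0):
    """Complexity penalty: gamma * T + (1/2) * lambda * ||w||^2."""
    return gamma * len(w) + 0.5 * lam * sum(wj * wj for wj in w)

def objective(y, y_hat, w, gamma=1.0, lam=1.0):
    """Training loss (squared loss here) plus the complexity of the tree."""
    loss = sum((yi - pi) ** 2 for yi, pi in zip(y, y_hat))
    return loss + omega(w, gamma, lam)

y, y_hat = [1.0, 0.0, 1.0], [0.5, 0.5, 1.0]   # toy labels and predictions
w = [0.5, -0.5]                                # leaf weights of one tree (T = 2)
assert objective(y, y_hat, w) == 0.5 + 2.0 + 0.25   # loss + gamma*T + ridge term
```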
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y_{i}^{(t)} = \sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{k=1}\omega(f_k)</math><br />
<br />
<math>Obj^{(t)}=\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+\omega(f_t)+\mathrm{constant}</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} \simeq \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)]+\omega(f_t)+\mathrm{constant}</math><br />
<br />
where, for the squared loss, <math>g_i =\partial_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)</math> and <math>h_i = \partial^2_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2</math><br />
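These first- and second-order statistics can be sanity-checked numerically against finite differences of the squared loss (an illustrative sketch):

```python
def loss(y, p):
    return (p - y) ** 2           # squared loss l(y, p)

def grad_stats(y, p):
    return 2.0 * (p - y), 2.0     # analytic g_i and h_i for squared loss

y, p, eps = 1.0, 0.3, 1e-4
g, h = grad_stats(y, p)
# Central finite differences of the loss with respect to the prediction p
g_num = (loss(y, p + eps) - loss(y, p - eps)) / (2 * eps)
h_num = (loss(y, p + eps) - 2 * loss(y, p) + loss(y, p - eps)) / eps ** 2
assert abs(g - g_num) < 1e-9 and abs(h - h_num) < 1e-6
```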
<br />
Define <math>I_j=\{i \mid q(x_i)=j\}</math> as the instance set of leaf <math>j</math> and <math>f_t(x_i)=w_j</math> for <math>i \in I_j</math>. We can rewrite the objective as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of leaf <math>j</math> is <math>w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}\left[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}\right]-\gamma</math><br />
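A minimal sketch of the two formulas above, written in terms of the per-leaf sums <math>G=\sum_{i\in I_j}g_i</math> and <math>H=\sum_{i\in I_j}h_i</math> (function names are illustrative):

```python
def leaf_weight(G, H, lam=1.0):
    """Optimal leaf weight w* = -G / (H + lambda)."""
    return -G / (H + lam)

def split_gain(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    """Loss reduction of a split: left + right scores minus the unsplit score."""
    score = lambda G, H: G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR) - score(GL + GR, HL + HR)) - gamma

# Squared loss, so h_i = 2 per instance; two instances on each side.
assert leaf_weight(G=4.0, H=4.0) == -0.8
# A split that separates opposite-signed gradients gains more than a weak one.
assert split_gain(GL=-4.0, HL=4.0, GR=4.0, HR=4.0) > split_gain(GL=-1.0, HL=4.0, GR=1.0, HR=4.0)
```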
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
The exact greedy algorithm is a split finding algorithm that enumerates all possible splits over all features. However, it is impossible to do so efficiently when the data does not fit entirely into memory.<br />
<br />
The algorithm is as follows:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
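For a single feature, Algorithm 1 amounts to sorting by feature value and scanning left to right while accumulating <math>G_L</math> and <math>H_L</math>. A compact Python sketch (the constant <math>\frac{1}{2}</math> and <math>\gamma</math> are omitted since they do not change which split is best):

```python
def exact_greedy_split(x, g, h, lam=1.0):
    """Enumerate every split position of one feature; return (split, gain)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    G, H = sum(g), sum(h)
    GL = HL = 0.0
    best_gain, best_split = 0.0, None
    for pos, i in enumerate(order[:-1]):
        GL, HL = GL + g[i], HL + h[i]
        GR, HR = G - GL, H - HL
        if x[order[pos]] == x[order[pos + 1]]:
            continue                      # cannot split between equal values
        gain = GL * GL / (HL + lam) + GR * GR / (HR + lam) - G * G / (H + lam)
        if gain > best_gain:
            best_gain = gain
            best_split = (x[order[pos]] + x[order[pos + 1]]) / 2
    return best_split, best_gain

x = [1.0, 2.0, 3.0, 4.0]
g = [-2.0, -2.0, 2.0, 2.0]               # gradients change sign between x=2 and x=3
h = [2.0, 2.0, 2.0, 2.0]                 # squared loss: h_i = 2
split, gain = exact_greedy_split(x, g, h)
assert split == 2.5 and gain > 0
```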
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Because the exact greedy algorithm is limited by memory and efficiency constraints, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, then maps the continuous features into buckets split by these candidate points, aggregates the statistics, and finds the best solution among the proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can reach the same accuracy as the exact greedy algorithm given a reasonable approximation level.<br />
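For one feature, the approximate algorithm can be sketched as: propose candidates at percentiles, bucket the instances, aggregate <math>g</math> and <math>h</math> per bucket, and score only the bucket boundaries (hypothetical helper names, not the paper's code):

```python
from bisect import bisect_left

def percentile_candidates(x, n_buckets):
    """Propose one candidate split at each internal percentile boundary."""
    xs = sorted(x)
    return [xs[len(xs) * k // n_buckets] for k in range(1, n_buckets)]

def approx_split(x, g, h, n_buckets=4, lam=1.0):
    cand = percentile_candidates(x, n_buckets)
    Gb, Hb = [0.0] * n_buckets, [0.0] * n_buckets
    for xi, gi, hi in zip(x, g, h):
        b = bisect_left(cand, xi)          # bucket index of this instance
        Gb[b] += gi
        Hb[b] += hi
    G, H = sum(Gb), sum(Hb)
    best_gain, best_cand, GL, HL = 0.0, None, 0.0, 0.0
    for j in range(n_buckets - 1):         # score only the bucket boundaries
        GL, HL = GL + Gb[j], HL + Hb[j]
        GR, HR = G - GL, H - HL
        gain = GL * GL / (HL + lam) + GR * GR / (HR + lam) - G * G / (H + lam)
        if gain > best_gain:
            best_gain, best_cand = gain, cand[j]
    return best_cand, best_gain

x = [float(i) for i in range(8)]
g = [-1.0] * 3 + [1.0] * 5                 # gradients change sign near x = 3
h = [2.0] * 8
split, gain = approx_split(x, g, h)
assert split == 2.0 and gain > 0           # the nearest percentile candidate
```

The best split found lies at the candidate nearest to the true optimum, which is the accuracy/efficiency trade-off the approximation level <math>\epsilon</math> controls.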
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Choosing the candidate split points is one of the most important steps in the approximate algorithm. The most common approach is to split at a feature's percentiles, so that the candidates are distributed evenly over the data. In XGBoost, the candidates are instead weighted by the second-order gradient statistics <math>h_i</math>, which is why the sketch is called weighted.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k=\{(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)\}</math><br />
<br />
we can define the following rank function:<br />
<br />
<math>r_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h</math><br />
<br />
The objective is to search for split points <math>\{s_{k1}, s_{k2}, \dots, s_{kl}\}</math> such that<br />
<br />
<math>|r_k(s_{k,j}) - r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
where <math>\epsilon</math> is an approximation factor, so that there are approximately <math>\frac{1}{\epsilon}</math> candidate points.<br />
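A small sketch of the rank function and an <math>\epsilon</math>-spaced candidate selection (uniform weights are used in the example for clarity; the functions accept arbitrary <math>h</math> weights):

```python
def rank_fn(pairs):
    """pairs: (feature value x, hessian weight h); returns the rank function r_k."""
    total = sum(h for _, h in pairs)
    return lambda z: sum(h for x, h in pairs if x < z) / total

def propose(pairs, eps):
    """Emit a candidate whenever the weighted rank has advanced by at least eps."""
    r = rank_fn(pairs)
    cands, last = [], 0.0
    for x in sorted(x for x, _ in pairs):
        if r(x) - last >= eps:
            cands.append(x)
            last = r(x)
    return cands

pairs = [(i / 10, 1.0) for i in range(10)]          # uniform weights for clarity
assert propose(pairs, eps=0.25) == [0.3, 0.6, 0.9]  # roughly 1/eps candidates
```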
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In practice, the input <math>x</math> is often quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large numbers of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g., one-hot encoding)<br />
<br />
In order to handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value is missing at a tree node, the instance follows the default direction; the optimal default direction is learned from the data with the following algorithm:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm can also be applied when the user sets a limit on accepted values: out-of-range values are neglected when calculating the score.<br />
<br />
The figure below compares a basic implementation with the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
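The default-direction idea can be sketched for a single fixed split: scan only the non-missing entries, then send the aggregated statistics of the missing entries to whichever side scores higher (an illustrative sketch, not the paper's Algorithm 3):

```python
def score(G, H, lam=1.0):
    return G * G / (H + lam)

def best_default_direction(x, g, h, split, lam=1.0):
    """x may contain None for missing entries; returns the default direction."""
    Gm = sum(gi for xi, gi in zip(x, g) if xi is None)
    Hm = sum(hi for xi, hi in zip(x, h) if xi is None)
    GL = sum(gi for xi, gi in zip(x, g) if xi is not None and xi < split)
    HL = sum(hi for xi, hi in zip(x, h) if xi is not None and xi < split)
    GR = sum(gi for xi, gi in zip(x, g) if xi is not None and xi >= split)
    HR = sum(hi for xi, hi in zip(x, h) if xi is not None and xi >= split)
    gain_left = score(GL + Gm, HL + Hm, lam) + score(GR, HR, lam)    # missing go left
    gain_right = score(GL, HL, lam) + score(GR + Gm, HR + Hm, lam)   # missing go right
    return "left" if gain_left >= gain_right else "right"

x = [1.0, 3.0, None]           # one missing entry
g = [-2.0, 2.0, -3.0]          # the missing instance resembles the left side
h = [2.0, 2.0, 2.0]
assert best_default_direction(x, g, h, split=2.0) == "left"
```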
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
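The column-block layout can be sketched as a CSC-like structure: each feature column stores (value, row index) pairs, sorted once by value and reused in every iteration (hypothetical helper name):

```python
def build_column_blocks(rows):
    """rows: list of feature vectors; returns one value-sorted column per feature."""
    n_features = len(rows[0])
    blocks = []
    for f in range(n_features):
        col = [(row[f], i) for i, row in enumerate(rows)]
        col.sort(key=lambda p: p[0])   # sorted once, reused in every iteration
        blocks.append(col)
    return blocks

rows = [[3.0, 10.0], [1.0, 30.0], [2.0, 20.0]]
blocks = build_column_blocks(rows)
assert blocks[0] == [(1.0, 1), (2.0, 2), (3.0, 0)]   # feature 0, sorted by value
assert [i for _, i in blocks[1]] == [0, 2, 1]        # row indices for feature 1
```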
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for split finding.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0+ \|x\|_0 \log n)</math><br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows. Since the blocks are kept in sorted order, a linear scan over the sorted columns suffices for the quantile-finding step.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0+ \|x\|_0 \log B)</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-contiguous memory fetch operations.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
* An overly small block size results in small per-thread workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
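As a structural illustration only (Python cannot show real cache behaviour, and all names here are assumed), the prefetching idea is to gather each small batch of non-contiguously indexed gradient statistics into a contiguous buffer first, and only then accumulate:<br />

```python
# Structural sketch (not a benchmark) of cache-aware prefetching: gradient
# statistics are reached through non-contiguous row indices, so the scan first
# copies a small batch into a contiguous buffer, then accumulates from it.

def scan_with_prefetch(sorted_row_ids, grad, hess, batch=16):
    GL = HL = 0.0
    partial_sums = []
    for start in range(0, len(sorted_row_ids), batch):
        ids = sorted_row_ids[start:start + batch]
        buf = [(grad[i], hess[i]) for i in ids]   # "prefetch" into a contiguous buffer
        for g, h in buf:                          # accumulation now streams the buffer
            GL, HL = GL + g, HL + h
            partial_sums.append((GL, HL))
    return partial_sums
```

In the real implementation the buffer breaks the read/write dependency between accumulation and the scattered memory fetches; here it only mimics the access pattern.<br />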
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques are used to improve out-of-core computation:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block<br />
<br />
* Decompress feature values on the fly in an independent thread while reading from disk<br />
<br />
* Compression ratio: 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
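A toy model of the two techniques together (all names and sizes illustrative, not XGBoost's actual code): blocks are kept zlib-compressed on two simulated disks, each disk has its own prefetcher thread filling an in-memory buffer, and the training thread alternates between the buffers:<br />

```python
# Toy model of out-of-core computation: block compression + block sharding,
# with one prefetcher thread per simulated disk and an alternating reader.
import zlib, threading, queue

def make_disk(blocks):                        # a "disk" holds compressed blocks
    return [zlib.compress(b) for b in blocks]

def prefetcher(disk, buf):
    for compressed in disk:
        buf.put(zlib.decompress(compressed))  # decompression happens off the training thread
    buf.put(None)                             # sentinel: this disk is exhausted

def train(disks):
    buffers = [queue.Queue(maxsize=2) for _ in disks]
    threads = [threading.Thread(target=prefetcher, args=(d, b))
               for d, b in zip(disks, buffers)]
    for t in threads:
        t.start()
    seen, live = [], list(buffers)
    while live:                               # alternate across the in-memory buffers
        for b in list(live):
            block = b.get()
            if block is None:
                live.remove(b)
            else:
                seen.append(block)
    for t in threads:
        t.join()
    return seen

disks = [make_disk([b"block-0", b"block-2"]), make_disk([b"block-1", b"block-3"])]
assert train(disks) == [b"block-0", b"block-1", b"block-2", b"block-3"]
```

The point of the model is the overlap: decompression and disk reads happen in the prefetcher threads while the training thread only drains ready buffers.<br />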
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost supports not only various objective functions (weighted classification, ranking, user-defined objectives), but also popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events and classifies whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, which ranks documents by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. All boosted trees use a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison. <br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png|center]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time, but also gives slightly higher performance, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of 3, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic, thanks to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks), with the dataset stored on AWS S3. Compared against Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as more machines are added, and it has the potential to handle even larger data, as it managed 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning; to be more efficient, an approximate algorithm is needed, so a sparsity-aware algorithm and a weighted quantile sketch are introduced for it. Further, insight into the column block structure and cache-aware access patterns explains why XGBoost scales better, and is more widely used, than other existing systems.<br />
<br />
</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs $O(Kd‖x‖_0 {log_n} )$<br />
<br />
Tree boosting on block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_n )$<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs $O(Kd‖x‖_0 log_q )$ <br />
<br />
Approximate algorithm with block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_B )$<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operations.<br />
<br />
[[File: figure_8.png|center]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed: each thread allocates an internal buffer, fetches the gradient statistics into it, and then accumulates them in a mini-batch manner.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png|center]]<br />
<br />
* An overly small block size results in small workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
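The following toy sketch mirrors the structure of the fix. In Python it only illustrates the access pattern and does not change real cache behavior; the buffering shown is what the C++ implementation does with prefetched gradient statistics before accumulating them.<br />

```python
def accumulate_direct(order, g):
    """Naive pattern: `order` visits g in sorted-feature order, so each
    read is a non-contiguous fetch immediately followed by accumulation."""
    total = 0.0
    for r in order:
        total += g[r]
    return total

def accumulate_prefetch(order, g, batch=8):
    """Cache-aware pattern: gather the needed statistics into a small
    contiguous buffer first, then accumulate from the buffer."""
    total = 0.0
    for start in range(0, len(order), batch):
        buf = [g[r] for r in order[start:start + batch]]  # prefetch step
        for v in buf:                                     # accumulate step
            total += v
    return total
```

Both functions compute the same sum; the point is that the second decouples the random-access fetch from the running accumulation, which removes the read/write dependency described above.<br />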
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block by columns<br />
<br />
* Decompress feature values on the fly by an independent thread while loading the block into main memory<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
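The sharding pipeline above can be sketched with threads and bounded queues standing in for disks and in-memory buffers (a toy model of the idea, not XGBoost code):<br />

```python
import queue
import threading

def prefetcher(disk_blocks, buf):
    """Per-disk pre-fetcher: pushes blocks into a bounded in-memory buffer."""
    for block in disk_blocks:
        buf.put(block)        # blocks while the buffer is full
    buf.put(None)             # sentinel: this disk is exhausted

def train(disks):
    """Training thread: reads from the buffers in an alternating manner."""
    buffers = [queue.Queue(maxsize=2) for _ in disks]
    for d, b in zip(disks, buffers):
        threading.Thread(target=prefetcher, args=(d, b), daemon=True).start()
    seen, live = [], set(range(len(disks)))
    while live:
        for i in sorted(live):            # round-robin across disks
            block = buffers[i].get()
            if block is None:
                live.discard(i)
            else:
                seen.append(block)        # stand-in for one boosting step
    return seen

# Blocks 0..3 sharded alternately onto two "disks":
result = train([[0, 2], [1, 3]])
```

The bounded queues model the in-memory buffers: a pre-fetcher blocks when its buffer is full, so disk reading naturally overlaps with the training thread's consumption.<br />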
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
XGBoost is implemented as a portable and reusable open-source package. It supports not only weighted classification and various objective functions (ranking, user-defined), but also popular languages (Python, R, Julia), data-science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events and classifies whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset for ranking documents by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used in the single-machine parallel setting, and the last in the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
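For illustration, the common setting can be written as an xgboost-style parameter dictionary. The parameter names (max_depth, eta, colsample_bytree) follow the xgboost library's conventions and are our assumption, not given in the text:<br />

```python
# Hypothetical parameter dictionary matching the experiments' common setting.
params = {
    "max_depth": 8,           # maximum tree depth
    "eta": 0.1,               # shrinkage (learning rate)
    "colsample_bytree": 1.0,  # no column subsampling
}
```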
<br />
[[File:Table_2.png|center]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm against R's GBM and scikit-learn on the Higgs-1M data, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree quickly but yields lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each), with the dataset stored on AWS S3. Compared with Spark MLLib and H2O, in-memory analytics frameworks that must store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems, and by taking advantage of out-of-core computation it smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even bigger data, since it managed 1.7 billion examples with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion Criteo dataset. Using more machines provides more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively; it achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning, but for better efficiency an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for it. Further, insight into the column block structure and cache-aware access patterns explains why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
== Reference ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png|center]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png|center]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png|center]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log n)</math><br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows. Since the blocks are also kept in sorted order, a linear scan over the sorted columns suffices for the quantile finding step.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log B)</math><br />
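The single linear scan enabled by the presorted column layout can be sketched as (hypothetical names; the gain follows the split formula from Section 2):<br />

```python
# Sketch of exact split finding as one linear scan over a presorted column.

def best_split_on_column(sorted_idx, x, g, h, lam=1.0):
    """sorted_idx: row indices ordered by feature value (the block layout).
    g, h: first- and second-order gradient per row. Returns (gain, threshold)."""
    G, H = sum(g), sum(h)
    GL = HL = 0.0
    best_gain, best_thr = 0.0, None
    for pos, i in enumerate(sorted_idx[:-1]):
        GL += g[i]; HL += h[i]
        nxt = sorted_idx[pos + 1]
        if x[i] == x[nxt]:
            continue  # only split between distinct feature values
        GR, HR = G - GL, H - HL
        score = GL**2 / (HL + lam) + GR**2 / (HR + lam) - G**2 / (H + lam)
        if score > best_gain:
            best_gain, best_thr = score, (x[i] + x[nxt]) / 2
    return best_gain, best_thr

# One pass over the sorted order finds the best threshold.
print(best_split_on_column([0, 1, 2, 3], [1, 2, 3, 4],
                           [-2.0, -1.0, 1.0, 2.0], [1.0, 1.0, 1.0, 1.0]))
# -> (6.0, 2.5)
```

Because the column is already sorted inside the block, each split search is linear in the number of non-missing entries rather than requiring a fresh sort.<br />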
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-contiguous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in a small per-thread workload and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics no longer fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples balances the cache properties and parallelization.<br />
<br />
For very large datasets, the data may not fit into main memory and must be stored on disk, so enabling out-of-core computation is important for scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques are used to improve out-of-core computation:<br />
<br />
1) Block Compression<br />
<br />
* Feature values are compressed within each block<br />
<br />
* Feature values are decompressed on the fly by an independent thread when a block is loaded<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* Used when multiple disks are available<br />
<br />
* The dataset is sharded onto the disks in an alternating (round-robin) manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
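The sharding and pre-fetching scheme above can be sketched as a minimal Python simulation (in-memory lists stand in for disks; all names are hypothetical):<br />

```python
import queue
import threading

# Shard rows across 2 "disks" round-robin, give each disk a pre-fetcher
# thread filling a bounded in-memory buffer, and have the training thread
# read from the buffers alternately.
rows = list(range(10))
disks = [rows[0::2], rows[1::2]]          # alternating (round-robin) sharding
buffers = [queue.Queue(maxsize=2) for _ in disks]

def prefetcher(disk, buf):
    for row in disk:
        buf.put(row)                      # simulate disk -> memory fetch
    buf.put(None)                         # end-of-disk marker

threads = [threading.Thread(target=prefetcher, args=(d, b))
           for d, b in zip(disks, buffers)]
for t in threads:
    t.start()

# Training thread: alternately drain each buffer until every disk finishes.
seen, live = [], set(range(len(buffers)))
while live:
    for i in list(live):
        row = buffers[i].get()
        if row is None:
            live.discard(i)
        else:
            seen.append(row)
for t in threads:
    t.join()
print(sorted(seen))                       # all 10 rows recovered
```

The bounded buffers let disk reads overlap with training, which is the point of the pre-fetcher threads: the training thread never waits for a full shard to load.<br />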
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The XGBoost system is implemented as a portable and reusable open source package. XGBoost not only supports weighted classification and various objective functions (ranking, user-defined), but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset produced from physics simulation events, is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, which ranks documents by query relevance. The last, the Criteo terabyte click log dataset, was pre-processed for a tree-based model and evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted.<br />
The first evaluation compares XGBoost against R's GBM and scikit-learn on the Higgs-1M data. For a fair comparison, XGBoost runs the exact greedy algorithm; it finishes more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but yields lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly better performance, likely because subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, using one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting, using a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each) and dataset storage on AWS S3. Compared with Spark MLlib and H2O, in-memory analytics frameworks that must keep the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and by exploiting out-of-core computation it scales smoothly to all 1.7 billion instances, whereas the baseline systems can only handle subsets of the data. XGBoost's performance scales linearly as more machines are added, and it has the potential to handle even larger data, since it processed 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines provides more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper presents XGBoost, a scalable end-to-end tree boosting system that achieves state-of-the-art results on a wide variety of problems. The exact greedy algorithm finds the best split in tree learning; for greater efficiency, an approximate algorithm is used, supported by the sparsity-aware split finding and the weighted quantile sketch. The paper also gives insight into the XGBoost system design, covering the column block structure and cache-aware access patterns, and explains why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
== Reference ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png|center]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs $O(Kd‖x‖_0 {log_n} )$<br />
<br />
Tree boosting on block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_n )$<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs $O(Kd‖x‖_0 log_q )$ <br />
<br />
Approximate algorithm with block structure costs $O(Kd‖x‖_0+ ‖x‖_0 log_B )$<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subset of crate data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGboost with different number of machines on criteo full 1.7 billion dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is discussing how a scalable end-to-end tree boosting system, which is XGBoost, effective used. It helps us to achieve state-of-the-art results on variety experiment challenges. We use the exact greedy algorithm in order to find the best split in tree learning. Be more effective, an approximate algorithm is needed. Therefore we introduce sparsity-aware algorithm and weighted quantile sketch for approximate algorithm. Further, we gain an insight into XGBoost system on column block, cache-aware access patterns, and explain why XGBoost scales is better and wider use than other systems in the existing data statistic.<br />
<br />
== Reference ==<br />
<br />
[1] R,Bekkerman, M. Bilenko, and J.Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gym package.<br />
<br />
[3] C.Burges. From ranknet to lambdarank to lambdamart: An overview. Learning, 11:23-581,2010.<br />
<br />
[4] J.Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232,2001<br />
<br />
[5] T.Zhang and R.Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40986XGBoost: A Scalable Tree Boosting System2018-11-22T19:52:28Z<p>Q39zhao: /* 2.1 Regularized Learning Objective */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png|center]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png|center]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png|center]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
n = number of training examples; <math>\|\mathbf{x}\|_0</math> = number of non-missing entries in the training data<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|\mathbf{x}\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|\mathbf{x}\|_0 + \|\mathbf{x}\|_0 \log n)</math>, since the <math>O(\|\mathbf{x}\|_0 \log n)</math> sort is paid only once, in preprocessing<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows. Since the blocks are kept in sorted order, a linear scan over the sorted columns suffices for the quantile-finding step.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|\mathbf{x}\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|\mathbf{x}\|_0 + \|\mathbf{x}\|_0 \log B)</math><br />
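The column-block idea can be illustrated with a small sketch (a plain-Python stand-in for the compressed column layout): each column is sorted once up front, keeping row indices, and every later split search just scans the pre-sorted columns linearly:<br />

```python
def build_column_blocks(rows):
    """rows: list of examples; rows[i][j] is feature j of example i,
    or None if missing.  Sort each column ONCE, keeping row indices so
    gradient statistics can be looked up during later linear scans."""
    n_features = len(rows[0])
    blocks = []
    for j in range(n_features):
        col = [(r[j], i) for i, r in enumerate(rows) if r[j] is not None]
        col.sort()                # paid once; amortized over all splits
        blocks.append(col)
    return blocks
```

Sorting once per column is what replaces the per-split sorting cost with a one-off preprocessing cost.<br />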
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naive implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-contiguous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed: each thread allocates an internal buffer, fetches the gradient statistics into it, and then accumulates them in a mini-batch manner.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small per-thread workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
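The prefetching idea can be mimicked in a sketch (illustrative only; the real implementation does this per thread in C++): instead of accumulating directly from non-contiguous memory locations, gather each mini-batch of gradient statistics into a small contiguous buffer first, then accumulate from the buffer:<br />

```python
def accumulate_cache_aware(sorted_rows, grad, batch=64):
    """sorted_rows: row indices in feature-sorted order (a non-contiguous
    access pattern).  Gather gradients batch-by-batch into a contiguous
    buffer, then accumulate -- breaking the read/write dependency."""
    total = 0.0
    for start in range(0, len(sorted_rows), batch):
        buf = [grad[i] for i in sorted_rows[start:start + batch]]  # gather
        for g in buf:                                              # accumulate
            total += g
    return total
```
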
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for scalable learning. Ideally, computation runs concurrently with disk reading to hide the I/O overhead. Two major techniques are used to improve out-of-core computation:<br />
<br />
1) Block Compression<br />
<br />
* Compress the feature values in each block<br />
<br />
* Decompress feature values on the fly in an independent thread while reading<br />
<br />
* Compression ratio: about 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* Used when multiple disks are available<br />
<br />
* Shard the dataset onto the disks in an alternating (round-robin) manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
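The steps above amount to a round-robin assignment of blocks to disks; the sketch below shows just the assignment step (the per-disk pre-fetcher threads and the shared buffers are omitted):<br />

```python
def shard_blocks(blocks, n_disks):
    """Assign data blocks to disks in an alternating (round-robin)
    manner so that reads can proceed from all disks in parallel."""
    shards = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        shards[i % n_disks].append(block)
    return shards
```
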
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
XGBoost is implemented as a portable and reusable open-source package. It supports not only various weighted classification, ranking, and user-defined objective functions, but also popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, in which documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for tree-based models; it evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. The boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling unless stated otherwise.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands only one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, using one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. With both optimizations, the transition point where the system runs out of file cache is less dramatic, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2, using m3.2xlarge machines (8 virtual cores, 30 GB RAM, two 80 GB SSD local disks) with the dataset stored on AWS S3. The baselines, Spark MLlib and H2O, are in-memory analytics frameworks that must keep the data in RAM, whereas XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle subsets of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even bigger data, given that it processed 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion-instance Criteo dataset. Using more machines yields more aggregate file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper presents XGBoost, a scalable end-to-end tree boosting system, and discusses how to use it effectively; it achieves state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning, but for greater efficiency an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced. Finally, we look into the XGBoost system design, including the column block structure and cache-aware access patterns, which explains why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
== Reference ==<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
<br />
[4] J. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
The same algorithm also applies when the user sets a limit on the accepted values: out-of-range values are simply ignored when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
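The core idea of the sparsity-aware algorithm, scanning only the non-missing entries and scoring each split once with the missing values sent left and once with them sent right, can be sketched as follows (a simplified single-feature Python illustration; the function name and the <math>\lambda=1</math> default are assumptions):<br />

```python
import numpy as np

def best_split_with_default(x, g, h, lam=1.0):
    """For a feature with missing values (NaN), scan only the non-missing
    entries; score every split twice, once sending missing values left and
    once right. Returns (threshold, default_direction, gain)."""
    present = ~np.isnan(x)
    G_tot, H_tot = g.sum(), h.sum()               # totals include missing rows
    G_miss, H_miss = g[~present].sum(), h[~present].sum()
    order = np.argsort(x[present])
    xs, gs, hs = x[present][order], g[present][order], h[present][order]

    def score(G, H):
        return G * G / (H + lam)

    best = (None, None, -np.inf)
    G_L = H_L = 0.0
    for i in range(len(xs) - 1):
        G_L += gs[i]; H_L += hs[i]
        thr = 0.5 * (xs[i] + xs[i + 1])
        for direction, GL in (("right", G_L), ("left", G_L + G_miss)):
            HL = H_L + (H_miss if direction == "left" else 0.0)
            gain = 0.5 * (score(GL, HL) + score(G_tot - GL, H_tot - HL)
                          - score(G_tot, H_tot))
            if gain > best[2]:
                best = (thr, direction, gain)
    return best
```

Because only non-missing entries are enumerated, the cost per feature is linear in the number of present values, which is where the large speedup on sparse data comes from.<br />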
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. To reduce this cost, XGBoost stores the data in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for split finding.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log n)</math><br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of the rows. Since the blocks are kept in sorted order, the quantile finding step only needs a linear scan over the sorted columns.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log B)</math><br />
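The benefit of the block layout, paying the sort cost once and reusing it for every linear split-finding scan, can be illustrated with a toy Python sketch (a simplified stand-in; XGBoost's actual blocks use a compressed sparse column format, and the class and method names here are assumptions):<br />

```python
import numpy as np

class ColumnBlock:
    """Store each feature column once in sorted order. The sort cost is paid
    a single time; every later split-finding pass is a linear scan over the
    presorted column."""
    def __init__(self, X):
        # argsort each column once: O(n log n) per feature, done one time
        self.order = np.argsort(X, axis=0)
        self.X = X

    def scan_column(self, j, g, h, lam=1.0):
        """Linear scan over presorted column j; return the best split gain."""
        idx = self.order[:, j]
        xs, gs, hs = self.X[idx, j], g[idx], h[idx]
        G_tot, H_tot = gs.sum(), hs.sum()
        best, G_L, H_L = -np.inf, 0.0, 0.0
        for i in range(len(xs) - 1):
            G_L += gs[i]; H_L += hs[i]
            if xs[i] == xs[i + 1]:
                continue                  # cannot split between equal values
            gain = 0.5 * (G_L**2 / (H_L + lam)
                          + (G_tot - G_L)**2 / (H_tot - H_L + lam)
                          - G_tot**2 / (H_tot + lam))
            best = max(best, gain)
        return best
```

Only the gradient statistics g and h change between boosting iterations, so the same sorted order is reused for every tree.<br />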
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation step and the non-contiguous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, the paper proposes a cache-aware prefetching algorithm that allocates an internal buffer for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the right block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small per-thread workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiments, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress the feature values in each block<br />
<br />
* Decompress feature values on the fly with an independent thread while reading<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
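The sharding-and-prefetching pattern can be sketched in Python with one prefetcher thread per simulated disk and a bounded in-memory buffer each (a toy illustration only; the names and the buffer size are assumptions, and real disks are stood in for by plain lists):<br />

```python
import queue
import threading

def shard_reader(shards, buffer_size=2):
    """Toy version of block sharding: one prefetcher thread per 'disk' fills
    a bounded queue; the training thread reads from the queues alternately."""
    buffers = [queue.Queue(maxsize=buffer_size) for _ in shards]
    SENTINEL = object()

    def prefetch(shard, buf):
        for block in shard:            # stands in for reading blocks off one disk
            buf.put(block)             # blocks when the in-memory buffer is full
        buf.put(SENTINEL)              # signal end of this shard

    threads = [threading.Thread(target=prefetch, args=(s, b), daemon=True)
               for s, b in zip(shards, buffers)]
    for t in threads:
        t.start()

    live = list(buffers)
    while live:
        for buf in list(live):         # consumer alternates across buffers
            block = buf.get()
            if block is SENTINEL:
                live.remove(buf)
            else:
                yield block
    for t in threads:
        t.join()
```

The bounded queues are the in-memory buffers; disk reads overlap with consumption, which is the point of the prefetcher threads.<br />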
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and ranking objective functions (including user-defined objectives), but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. The boosting trees share a common setting: maximum depth 8, shrinkage 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation compares XGBoost, running the exact greedy algorithm, with R's GBM and scikit-learn on the Higgs-1M data; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree quickly but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher accuracy, likely because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. A less dramatic transition point is observed when the system runs out of file cache, due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks) and dataset storage on AWS S3. Compared against Spark MLLib and H2O, in-memory analytics frameworks that need to keep the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even larger data, as it managed 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion instance Criteo dataset. Using more machines yields more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper presents XGBoost, a scalable end-to-end tree boosting system, and discusses how it can be used effectively; it achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning, but a more efficient approximate algorithm is often needed, so the sparsity-aware split finding algorithm and the weighted quantile sketch are introduced for the approximate setting. Further, the column block structure and cache-aware access patterns give insight into the system design and explain why XGBoost scales better and is more widely used than other existing systems.<br />
<br />
</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g., one-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values are simply neglected when calculating the score.<br />
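The idea of learning a default direction can be illustrated for a single fixed split: compute the gradient statistics of the missing entries once, then score sending them left versus right and keep the better direction. The function and variable names below are assumptions of this sketch.<br />

```python
def best_default_direction(values, grad, hess, threshold, lam=1.0):
    """Score routing all missing values left vs. right for a fixed split.

    values: one feature's values, with None marking a missing entry.
    Returns 'left' or 'right', whichever gives the higher structure score.
    """
    G, H = sum(grad), sum(hess)
    # Statistics of the non-missing instances on the left of the threshold.
    GL = sum(g for x, g in zip(values, grad) if x is not None and x < threshold)
    HL = sum(h for x, h in zip(values, hess) if x is not None and x < threshold)
    # Statistics of the missing entries, accumulated once.
    Gm = sum(g for x, g in zip(values, grad) if x is None)
    Hm = sum(h for x, h in zip(values, hess) if x is None)

    def score(gl, hl):
        gr, hr = G - gl, H - hl
        return gl**2 / (hl + lam) + gr**2 / (hr + lam)

    left = score(GL + Gm, HL + Hm)   # missing values default to the left child
    right = score(GL, HL)            # missing values default to the right child
    return 'left' if left >= right else 'right'
```

Because only non-missing entries are scanned and the missing statistics are a single aggregate, the cost depends on the number of non-missing entries, which is what makes the algorithm sparsity-aware.<br />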
<br />
The figure below shows a comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for split finding.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log n)</math><br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of the rows in the dataset. The blocks are also kept in sorted order, so a linear scan over the sorted columns is enough for the quantile finding step.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log B)</math><br />
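The block idea can be illustrated in Python: pay the <math>O(\|x\|_0 \log n)</math> sorting cost once per feature, keeping pointers back to the row indices, then reuse the sorted order for every subsequent linear scan. The helper names below are assumptions of this sketch.<br />

```python
def build_column_block(X):
    """Pre-sort each feature once, keeping (value, row_index) pairs.

    Mimics the CSC-like in-memory block: sorting is paid once, after
    which every split search is a linear scan over the sorted column.
    """
    n_features = len(X[0])
    return [sorted((row[k], i) for i, row in enumerate(X))
            for k in range(n_features)]

def scan_column(block, k, grad):
    """Linear scan over pre-sorted feature k, yielding prefix gradient sums."""
    GL = 0.0
    for value, row in block[k]:
        GL += grad[row]          # row index maps back to this instance's gradient
        yield value, GL
```

Every tree level and every boosting round reuses the same block, which is where the amortized <math>\log n</math> term in the block-structure cost comes from.<br />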
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation step and the non-continuous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads per thread and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiments, a block size of <math>2^{16}</math> examples balances the cache property and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature values on the fly in an independent thread while reading<br />
<br />
* Compression ratio: about 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
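The sharding pattern above, one pre-fetcher thread per disk filling a bounded in-memory buffer while the training thread alternates between buffers, can be mimicked with the standard library. Lists stand in for on-disk shards, and all names are assumptions of this sketch.<br />

```python
import queue
import threading

def prefetch(shard, buf):
    """Pre-fetcher thread: stream one shard's blocks into a bounded buffer."""
    for block in shard:
        buf.put(block)      # blocks if the in-memory buffer is full
    buf.put(None)           # end-of-shard sentinel

def train_from_shards(shards, buffer_size=2):
    """Training thread: alternately drain one block from each live buffer."""
    buffers = [queue.Queue(maxsize=buffer_size) for _ in shards]
    threads = [threading.Thread(target=prefetch, args=(s, b))
               for s, b in zip(shards, buffers)]
    for t in threads:
        t.start()
    seen, live = [], set(range(len(buffers)))
    while live:
        for i in sorted(live):        # read from each buffer in turn
            block = buffers[i].get()
            if block is None:
                live.discard(i)
            else:
                seen.append(block)    # stand-in for training on the block
    for t in threads:
        t.join()
    return seen
```

The bounded queues give the throughput benefit of the real design: disk reads overlap with computation, and neither side can run unboundedly ahead of the other.<br />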
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and ranking objective functions (including user-defined ones), but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim, and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used in the single-machine parallel setting, and the last in the distributed and out-of-core settings. The boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling unless stated otherwise.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison. <br />
The first evaluation compares XGBoost against R's GBM and scikit-learn on the Higgs-1M data, with XGBoost running the exact greedy algorithm; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning-to-rank problem by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time, but also gives slightly higher performance, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, thanks to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks), with the dataset stored on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as more machines are added, and it has the potential to handle even larger data, as it managed to process the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, is used effectively, achieving state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning; to be more efficient, an approximate algorithm is needed, so a sparsity-aware split-finding algorithm and a weighted quantile sketch are introduced for approximate learning. Further, insights into the XGBoost system, such as the column block structure and cache-aware access patterns, explain why XGBoost scales better and is more widely used than other systems.<br />
<br />
=== Reference ===<br />
<br />
[1] R. Bekkerman, M. Bilenko, and J. Langford. Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, New York, NY, USA, 2011.<br />
[2] G. Ridgeway. Generalized Boosted Models: A guide to the gbm package.<br />
[3] C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581, 2010.<br />
[4] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.<br />
[5] T. Zhang and R. Johnson. Learning nonlinear functions using regularized greedy forest. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 2014.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When the feature value needed at a tree node is missing, the following algorithm is used to learn the optimal default direction from the data:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values can be treated as missing and neglected when calculating the score.<br />
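The core of the default-direction choice can be sketched as follows (our own simplified version of the algorithm above: the gradient statistics of the missing entries are sent left and then right, and the direction with the higher score wins; all names are hypothetical):<br />

```python
# For a fixed split value, decide where instances with a missing feature
# value should go by default, using the structure score G^2 / (H + lam).
def default_direction(present, missing, split, lam=1.0):
    """present: (x, g, h) triples with observed feature values;
    missing: (g, h) pairs for instances whose feature value is absent."""
    score = lambda G, H: G * G / (H + lam)
    GL = sum(g for x, g, _ in present if x < split)
    HL = sum(h for x, _, h in present if x < split)
    Gp = sum(g for _, g, _ in present)
    Hp = sum(h for _, _, h in present)
    Gm = sum(g for g, _ in missing)
    Hm = sum(h for _, h in missing)
    gain_left = score(GL + Gm, HL + Hm) + score(Gp - GL, Hp - HL)   # missing go left
    gain_right = score(GL, HL) + score(Gp - GL + Gm, Hp - HL + Hm)  # missing go right
    return "left" if gain_left >= gain_right else "right"
```

Because only non-missing entries are enumerated, the cost is linear in the number of present values, which is what makes the method sparsity-aware.<br />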
<br />
The figure below shows the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is getting the data into sorted order. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the pre-sorted block provides the statistics needed for split finding.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
n = number of training examples, and <math>\|x\|_0</math> = number of non-missing entries in the training data<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log n)</math>, since the <math>\|x\|_0 \log n</math> sorting cost is paid only once<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows of the dataset. The blocks are also kept in sorted order, so a linear scan over the sorted columns is enough for the quantile finding step.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log B)</math><br />
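The benefit of the pre-sorted column layout can be illustrated with a small sketch of ours (not XGBoost's actual compressed-column block code): each feature column is sorted once, and afterwards every prefix of gradient statistics, which is exactly what split finding needs, comes from one linear scan.<br />

```python
# Toy column block: per feature, (value, row index) pairs sorted by value.
def make_column_block(X):
    """X: list of rows. Sort each feature column once, keeping row pointers."""
    n_features = len(X[0])
    return [sorted((row[j], i) for i, row in enumerate(X)) for j in range(n_features)]

def scan_column(column, g, h):
    """One linear scan over a sorted column yields every prefix (GL, HL)."""
    GL = HL = 0.0
    prefixes = []
    for value, i in column:
        GL += g[i]          # row pointer i recovers the gradient statistics
        HL += h[i]
        prefixes.append((value, GL, HL))
    return prefixes
```

The block is built once before boosting; each of the K*d split searches then only pays the linear scan, which is where the <math>O(Kd\|x\|_0)</math> term comes from.<br />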
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-contiguous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, the paper proposes a cache-aware prefetching algorithm that allocates an internal buffer for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiments, a block size of 2<sup>16</sup> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block<br />
<br />
* Decompress feature values on the fly by an independent thread when loading blocks into memory<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and ranking objective functions (including user-defined objectives), but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, used for predicting the likelihood of an insurance claim, evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, produced from physics simulation events, is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, in which documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
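In the xgboost Python package, the common experimental configuration above would look roughly like the following (parameter names follow the library's conventions; treat this as an illustrative sketch rather than the authors' exact setup):<br />

```python
# Hedged sketch of the paper's common experimental settings expressed as
# xgboost-style training parameters.
params = {
    "max_depth": 8,           # maximum tree depth used in all experiments
    "eta": 0.1,               # shrinkage (learning rate)
    "colsample_bytree": 1.0,  # 1.0 means no column subsampling
}
```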
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison with other systems.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree quickly but gives lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because the subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, due to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks) and dataset storage on AWS S3. Compared with Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computation, and smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even bigger data, since it handled the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is to present how XGBoost, a scalable end-to-end tree boosting system, is used effectively. It achieves state-of-the-art results on a variety of challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate algorithm. Further, we gain insight into the XGBoost system through the column block structure and cache-aware access patterns, and explain why XGBoost scales better and is more widely used than other existing systems.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40971XGBoost: A Scalable Tree Boosting System2018-11-22T19:23:02Z<p>Q39zhao: /* 4.1 Column Block for Parallel Learning */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
Today, machine learning and data-driven methods are significant and widely used. Tree boosting is considered one of the best machine learning methods, as it provides state-of-the-art results for a wide range of problems. This page introduces XGBoost, a scalable end-to-end tree boosting system. We describe the exact greedy algorithm and the approximate algorithm, and then present two important parts of the approximate algorithm: a novel sparsity-aware algorithm and the weighted quantile sketch. Finally, we explore the reasons why XGBoost has become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression decision tree (also known as classification and regression tree):<br />
* Decision rules are the same as in a decision tree<br />
* Each leaf contains one score<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in \mathcal{F}</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is the training loss and <math>\sum_{k=1}^K \omega(f_k)</math> is the complexity of the trees<br />
<br />
So the objective function to optimize is <math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in \mathcal{F}</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda\|w\|^2</math>, T is the number of leaves of the tree, and w is the vector of leaf scores<br />
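As a small numeric illustration of the complexity penalty (our own sketch; the gamma and lam values here are arbitrary):<br />

```python
# omega(f) = gamma * T + 0.5 * lambda * ||w||^2 for one tree with T leaves
# and leaf score vector w.
def omega(leaf_scores, gamma=1.0, lam=1.0):
    T = len(leaf_scores)
    return gamma * T + 0.5 * lam * sum(w * w for w in leaf_scores)
```

Larger gamma punishes trees with many leaves; larger lam punishes extreme leaf scores.<br />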
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_{i}^{(0)} = 0</math><br />
<br />
<math>\hat y_{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y_{i}^{(t)} = \sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{k=1}\omega(f_k)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+\omega(f_t)+constant</math><br />
<br />
Take the second-order Taylor expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =\partial_{\hat y_i^{(t-1)}} l(y_i,\hat y_i^{(t-1)})</math> and <math>h_i = \partial^2_{\hat y_i^{(t-1)}} l(y_i,\hat y_i^{(t-1)})</math>; for the squared error loss <math>l=(\hat y_i^{(t-1)}-y_i)^2</math>, these are <math>g_i = 2(\hat y_i^{(t-1)}-y_i)</math> and <math>h_i = 2</math><br />
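These gradient statistics can be checked numerically with central finite differences (a hedged sketch of ours, not part of the paper):<br />

```python
# Verify g = 2*(yhat - y) and h = 2 for the squared error loss.
def loss(y, yhat):
    return (yhat - y) ** 2

def finite_diff(f, x, eps=1e-5):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

y, yhat = 1.0, 0.3
g = finite_diff(lambda p: loss(y, p), yhat)                              # first derivative
h = finite_diff(lambda p: finite_diff(lambda q: loss(y, q), p), yhat)    # second derivative
```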
<br />
Define <math>I_j=\{i|q(x_i)=j\}</math> as the instance set of leaf j, and let <math>f_t(x_i)=w_{q(x_i)}</math>, where q maps an instance to a leaf. Dropping the constant terms, we can rewrite the objective as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of leaf j is <math>w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}\left[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}\right]-\gamma</math>, where <math>I = I_L \cup I_R</math><br />
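The two formulas translate directly into code (our own sketch; G and H denote the summed first- and second-order gradient statistics of a leaf):<br />

```python
# Optimal leaf weight and split gain as in the XGBoost paper.
def optimal_weight(G, H, lam=1.0):
    """w* = -G / (H + lambda)."""
    return -G / (H + lam)

def split_gain(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    """Loss reduction of splitting a leaf into left (GL, HL) and right (GR, HR)."""
    score = lambda G, H: G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR) - score(GL + GR, HL + HR)) - gamma
```

A split is kept only when the gain is positive, so gamma acts as a pruning threshold.<br />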
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
The exact greedy algorithm is a split finding algorithm that enumerates all possible splits over all features. However, this cannot be done efficiently when the data does not fit entirely into memory.<br />
<br />
The algorithm is as follows:<br />
<br />
[[File: Algorithm_1.png]]<br />
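A minimal single-feature sketch of the enumeration (ours, not the paper's pseudocode; it sweeps the sorted values once and scores every distinct split position):<br />

```python
# Exact greedy split search on one feature, using gradient statistics g, h.
def exact_greedy_split(x, g, h, lam=1.0):
    order = sorted(range(len(x)), key=lambda i: x[i])
    G, H = sum(g), sum(h)
    score = lambda gs, hs: gs * gs / (hs + lam)
    best_gain, best_split = 0.0, None
    GL = HL = 0.0
    for rank, i in enumerate(order[:-1]):
        GL += g[i]
        HL += h[i]
        j = order[rank + 1]
        if x[j] == x[i]:
            continue  # a split can only fall between distinct feature values
        gain = 0.5 * (score(GL, HL) + score(G - GL, H - HL) - score(G, H))
        if gain > best_gain:
            best_gain, best_split = gain, (x[i] + x[j]) / 2
    return best_gain, best_split
```

Unlike the approximate algorithm of Section 3.2, every distinct feature value is a candidate here, which is exact but expensive on large data.<br />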
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Because the exact greedy algorithm is limited by computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, then maps the continuous features into buckets split by these candidate points, aggregates the statistics, and finds the best solution among the proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
As the figure above shows, the quantile strategy can reach the same accuracy as the exact greedy algorithm given a reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Selecting candidate split points is one of the most important phases of the approximate algorithm. The most common approach is to split at the feature's percentiles in order to obtain a uniform distribution of the selected data.<br />
<br />
Formally, if we have the multiset<br />
<br />
<math>D_k=\{(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)\}</math><br />
<br />
we can define the following rank function, which gives the proportion of instances, weighted by the second-order gradient statistic h, whose k-th feature value is smaller than z:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points <math>\{s_{k1}, s_{k2}, ..., s_{kl}\}</math> such that<br />
<br />
<math>|R_k(s_{k,j}) - R_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
where <math>\epsilon</math> is an approximation factor, so that there are roughly <math>\frac{1}{\epsilon}</math> candidate split points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In practice, the input x is often quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g., one-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When the feature value needed at a tree node is missing, the following algorithm is used to learn the optimal default direction from the data:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values can be treated as missing and neglected when calculating the score.<br />
<br />
The figure below shows the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is getting the data into sorted order. To reduce this cost, XGBoost stores the data in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
n = number of training examples (<math>\|x\|_0</math> denotes the number of non-missing entries)<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0 \log n)</math><br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log n)</math>, where the second term is the one-time preprocessing sort<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows. The blocks are also kept in sorted order, so for the quantile finding step a linear scan over the sorted columns is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0 \log B)</math><br />
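The benefit of the block layout can be illustrated with a Python toy (the real implementation is compressed column (CSC) storage in C++; the variable names here are ours): each column is sorted once, keeping row indices, and every later split search is a plain linear scan that accumulates the gradient statistics <math>G_L, H_L</math> instead of re-sorting:<br />

```python
X = [[3.0, 10.0],
     [1.0, 30.0],
     [2.0, 20.0]]   # 3 rows, 2 feature columns

n_rows, n_cols = len(X), len(X[0])

# Built once, like the column block: per column, row indices sorted by value.
sorted_idx = [sorted(range(n_rows), key=lambda r: X[r][j])
              for j in range(n_cols)]

def scan_column(j, grad, hess):
    """Linear scan over pre-sorted column j, accumulating the left-side
    gradient statistics (G_L, H_L) at every potential split position."""
    GL = HL = 0.0
    stats = []
    for row in sorted_idx[j]:
        GL += grad[row]
        HL += hess[row]
        stats.append((X[row][j], GL, HL))
    return stats

stats = scan_column(0, grad=[0.5, -1.0, 0.25], hess=[1.0, 1.0, 1.0])
```

Because `sorted_idx` is reused across all <math>Kd</math> split searches, the <math>\|x\|_0 \log n</math> sorting term is paid only once, matching the cost comparison above.<br />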
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operation, which slows down split finding when the gradient statistics do not fit into the CPU cache.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed: an internal buffer is allocated for fetching the gradient statistics before accumulation.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads per thread and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques are used to improve out-of-core computation:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block<br />
<br />
* Decompress feature values on the fly in an independent thread when loading into main memory<br />
<br />
* Compression ratio: about 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread then alternately reads data from each buffer<br />
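The sharding scheme above can be sketched with a minimal Python producer/consumer (illustrative only; the real system shards file blocks across physical disks, and the shard lists and buffer size here are our own stand-ins): one prefetcher thread per shard fills a bounded in-memory buffer, and the training loop alternates between buffers:<br />

```python
import queue
import threading

# Blocks alternated across two "disks"; each list stands in for a shard file.
shards = [[0, 2, 4, 6], [1, 3, 5, 7]]

# One bounded in-memory buffer per disk.
buffers = [queue.Queue(maxsize=2) for _ in shards]

def prefetcher(shard, buf):
    for block in shard:       # stands in for a disk read
        buf.put(block)        # blocks when the buffer is full
    buf.put(None)             # end-of-shard sentinel

threads = [threading.Thread(target=prefetcher, args=(s, b))
           for s, b in zip(shards, buffers)]
for t in threads:
    t.start()

# Training thread alternately drains each buffer, round-robin.
consumed, live = [], list(buffers)
while live:
    for buf in list(live):
        block = buf.get()
        if block is None:
            live.remove(buf)
        else:
            consumed.append(block)

for t in threads:
    t.join()
```

Because each `get` blocks until its prefetcher delivers, disk reads overlap with computation while the consumption order stays deterministic.<br />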
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and objective functions (ranking, user-defined), but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, used for predicting the likelihood of an insurance claim, evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, produced from physics simulation events, is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last one for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn under the same settings, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
The second group evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time, but also gives slightly higher accuracy, empirically because the subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
The third group evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of 3, and sharding onto two disks gives a further 2x speedup. A less dramatic transition point is observed when the system runs out of file cache, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
The fourth group evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks) and the dataset stored on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and by taking advantage of out-of-core computation it smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even bigger data, as it managed to process the 1.7 billion examples with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion-instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is to discuss how XGBoost, a scalable end-to-end tree boosting system, can be used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate algorithm. Further, we gain insight into the XGBoost system through its column block and cache-aware access patterns, which explain why XGBoost scales better and is more widely used than other systems in practice.</div>Q39zhao
http://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40969XGBoost: A Scalable Tree Boosting System2018-11-22T19:19:43Z<p>Q39zhao: /* 2.1 Regularized Learning Objective */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\Delta x) \simeq f(x)+f^{'}(x)\Delta x+\frac{1}{2}f^{''}(x)\Delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0\log q)</math>.<br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0\log B)</math>.<br />
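A minimal sketch of that quantile-finding scan (illustrative only, not the paper's weighted quantile sketch data structure; the function name is invented): given a column already sorted by value, with second-order gradients as weights, a candidate is emitted every <math>\epsilon</math> fraction of the total weight, giving roughly <math>1/\epsilon</math> proposals.<br />

```python
def propose_candidates(sorted_values, weights, eps):
    # Single linear scan over a pre-sorted column: emit a candidate
    # whenever the accumulated (hessian) weight crosses the next
    # multiple of eps * total weight -- roughly 1/eps candidates.
    total = sum(weights)
    candidates, acc = [], 0.0
    next_rank = eps * total
    for value, w in zip(sorted_values, weights):
        acc += w
        if acc >= next_rank:
            candidates.append(value)
            next_rank = (len(candidates) + 1) * eps * total
    return candidates
```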
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the gradient accumulation and the non-continuous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed: an internal buffer is allocated for fetching the gradient statistics, which are then accumulated in a mini-batch manner.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small per-thread workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block by column<br />
<br />
* Decompress feature values on the fly, in an independent thread, when a block is loaded into main memory<br />
<br />
* Compression ratio: about 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* Used when multiple disks are available<br />
<br />
* Shard the dataset onto the disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
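The sharding scheme can be sketched with Python threads (a toy stand-in, not XGBoost's implementation; all names are invented): one pre-fetcher thread per simulated disk fills a bounded in-memory buffer, while the training thread alternately drains the buffers, so "disk" reads overlap with computation.<br />

```python
import queue
import threading

def prefetcher(shard, buf):
    # Pre-fetcher thread for one disk shard: push each data block into
    # the shard's in-memory buffer, then a sentinel marking end-of-shard.
    for block in shard:
        buf.put(block)
    buf.put(None)

def train_over_shards(shards):
    buffers = [queue.Queue(maxsize=2) for _ in shards]  # bounded buffers
    threads = [threading.Thread(target=prefetcher, args=(s, b))
               for s, b in zip(shards, buffers)]
    for t in threads:
        t.start()
    seen, live = [], set(range(len(shards)))
    while live:
        for i in sorted(live):        # alternate between shard buffers
            block = buffers[i].get()  # blocks until the prefetcher delivers
            if block is None:
                live.discard(i)
            else:
                seen.extend(block)    # stand-in for training on the block
    for t in threads:
        t.join()
    return seen

# Two simulated disks holding row blocks:
rows = train_over_shards([[[1, 2], [3, 4]], [[5, 6]]])
```

The bounded queue is what makes the buffer "in-memory but small": the pre-fetcher stalls when the training thread falls behind, instead of loading the whole shard at once.<br />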
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost supports not only various weighted classification and objective functions (ranking, user-defined), but also popular languages (Python, R, Julia), data-science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset produced from physics simulation events, is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed into tree-based model features, which evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used in the single-machine parallel setting, and the last in the distributed and out-of-core settings. All boosting trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
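As a hedged sketch, the common setting above maps onto the xgboost Python package's parameter names roughly as follows (names follow that package's documented API; verify against the installed version before use):<br />

```python
# Common boosting setting used across the evaluations, expressed as an
# xgboost-style parameter dictionary (to be passed to xgb.train):
params = {
    "max_depth": 8,           # maximum tree depth = 8
    "eta": 0.1,               # shrinkage (learning rate) = 0.1
    "colsample_bytree": 1.0,  # no column subsampling
    "tree_method": "exact",   # exact greedy; "approx" = approximate algorithm
}
```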
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison. <br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn; XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but yields lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
The second group evaluates XGBoost on learning-to-rank problems against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher accuracy, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
The third group evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. A less dramatic transition point is observed when the system runs out of file cache, due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
The fourth group evaluates the XGBoost system in the distributed setting: a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks), with the dataset stored on AWS S3. Compared against Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and by exploiting out-of-core computation it scales smoothly to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has the potential to handle even larger data, since it processed the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion-instance Criteo dataset. Using more machines yields more file cache and makes the system run faster, so the trend is slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper presents XGBoost, a scalable end-to-end tree boosting system, and shows how to use it effectively; it achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning; for better efficiency, an approximate algorithm is used instead, for which the sparsity-aware algorithm and the weighted quantile sketch are introduced. Further, the column-block layout and cache-aware access patterns give insight into why XGBoost scales better and is more widely used than other existing systems.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== Introduction ==<br />
<br />
In existing society, machine learning and data-driven methods are significant and they use widely. Tree boosting is considered to be one of the best machine learning methods, it provides us state-of-the-art results to solve a wide of range problems. We mainly introduce XGBoost, a scalable end-to-end tree boosting system in this page. We demonstrate the exact greedy algorithm and approximate algorithm. Further, we propose two important parts in the approximate algorithm: novel sparsity-aware algorithm and weighted quantile sketch. For comparison, we explore the reasons why XGBoost become important in many areas.<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Feature values are compressed by columns within each block<br />
<br />
* Blocks are decompressed on the fly by an independent thread when loaded into main memory<br />
<br />
* Compression ratio: about 26%–29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
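The two techniques above can be sketched together in a toy pipeline. Here zlib and Python threads are illustrative stand-ins for XGBoost's actual block compression scheme and pre-fetcher threads:<br />

```python
import threading, queue, zlib
import numpy as np

def make_shards(values, block_rows, n_disks):
    # Split values into blocks, compress each, and shard the blocks
    # across "disks" (plain lists here) in an alternating manner.
    blocks = [values[i:i + block_rows]
              for i in range(0, len(values), block_rows)]
    shards = [[] for _ in range(n_disks)]
    for k, blk in enumerate(blocks):
        shards[k % n_disks].append(zlib.compress(blk.tobytes()))
    return shards

def prefetcher(shard, buf):
    # One pre-fetcher thread per disk: decompress blocks into a bounded
    # in-memory buffer while the training thread consumes them.
    for comp in shard:
        buf.put(np.frombuffer(zlib.decompress(comp), dtype=np.float64))
    buf.put(None)   # end-of-shard marker

values = np.arange(1_000, dtype=np.float64)
shards = make_shards(values, block_rows=128, n_disks=2)
buffers = [queue.Queue(maxsize=2) for _ in shards]
threads = [threading.Thread(target=prefetcher, args=(s, b))
           for s, b in zip(shards, buffers)]
for t in threads:
    t.start()

# Training thread alternately drains each disk's buffer.
total, live = 0.0, list(buffers)
while live:
    for b in list(live):
        block = b.get()
        if block is None:
            live.remove(b)
        else:
            total += block.sum()
for t in threads:
    t.join()
```

Because each buffer is bounded (maxsize=2), decompression on the pre-fetcher threads overlaps with the training thread's consumption, mimicking computation running concurrently with disk reading.<br />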
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various objective functions (weighted classification, ranking, and user-defined objectives), but also integrates with popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, used for predicting the likelihood of an insurance claim, evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, produced from physics simulation events, is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, in which documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time, but also gives slightly higher performance, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. A less dramatic transition point is observed when the system runs out of file cache, thanks to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2, using m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each) with the dataset stored on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as more machines are added, and it has large potential to handle even bigger data, since it managed 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|thumb|center|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|thumb|center|Scaling of XGBoost with different numbers of machines on the full 1.7 billion Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively; it achieves state-of-the-art results on a variety of challenges. The exact greedy algorithm is used to find the best split in tree learning; for better efficiency, an approximate algorithm is needed, and the sparsity-aware algorithm and weighted quantile sketch are introduced for it. Further, the XGBoost system design (column blocks and cache-aware access patterns) is examined to explain why XGBoost scales better and is more widely used than other existing systems.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40959XGBoost: A Scalable Tree Boosting System2018-11-22T18:46:06Z<p>Q39zhao: /* 6.6 Distributed Experiment */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against scikit-learn and R's GBM, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which makes it fast but costs accuracy, whereas both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
The second group evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, empirically because subsampling helps prevent overfitting.<br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
The third group evaluates the XGBoost system in the out-of-core setting on the Criteo data, on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of about 3, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than in a naive setup, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|200px|thumb|center|Comparison of out-of-core methods on different subsets of Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
The fourth group evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2, using m3.2xlarge machines (8 virtual cores, 30 GB RAM, and two 80 GB SSD local disks each) with the dataset stored on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to keep the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems and, by taking advantage of out-of-core computation, smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle a subset of the data. XGBoost's performance scales linearly as machines are added; since it handles the 1.7 billion instances with only four machines, it has large potential to handle even larger data.<br />
<br />
[[File:criteo.png|thumb|Comparison of different distributed systems on 32 EC2 nodes, for 10 iterations on different subsets of Criteo data]]<br />
<br />
[[File: machine.png|thumb|Scaling of XGBoost with different numbers of machines on the full 1.7 billion Criteo dataset. Using more machines yields more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, can be used effectively. It helps achieve state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for the approximate setting. Further, we gain insight into the XGBoost system's column block structure and cache-aware access patterns, and explain why XGBoost scales better and is more widely used than other existing systems.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40957XGBoost: A Scalable Tree Boosting System2018-11-22T18:45:23Z<p>Q39zhao: /* 6.4 Learning to Rank */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|center|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|center]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subset of crate data]]<br />
<br />
[[File: machine.png|thumb|Scaling of XGboost with different number of machines on criteo full 1.7 billion dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is discussing how a scalable end-to-end tree boosting system, which is XGBoost, effective used. It helps us to achieve state-of-the-art results on variety experiment challenges. We use the exact greedy algorithm in order to find the best split in tree learning. Be more effective, an approximate algorithm is needed. Therefore we introduce sparsity-aware algorithm and weighted quantile sketch for approximate algorithm. Further, we gain an insight into XGBoost system on column block, cache-aware access patterns, and explain why XGBoost scales is better and wider use than other systems in the existing data statistic.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40956XGBoost: A Scalable Tree Boosting System2018-11-22T18:45:01Z<p>Q39zhao: /* 6.4 Learning to Rank */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|left|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png|right]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|thumb|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of Criteo data]]<br />
<br />
[[File: machine.png|thumb|Scaling of XGBoost with different numbers of machines on the full 1.7 billion-instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper discusses how XGBoost, a scalable end-to-end tree boosting system, is used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for approximate split finding. Further, insight into the XGBoost system is gained from the column block structure and cache-aware access patterns, which explain why XGBoost scales better and is more widely used than other existing systems.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40955XGBoost: A Scalable Tree Boosting System2018-11-22T18:44:40Z<p>Q39zhao: /* 6.4 Learning to Rank */</p>
http://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40949XGBoost: A Scalable Tree Boosting System2018-11-22T18:40:52Z<p>Q39zhao: /* 6.5 Out-of-core Experiment */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression decision tree (also known as classification and regression tree, CART):<br />
* Decision rules are the same as in a decision tree<br />
* Each leaf contains one score<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in \mathcal{F}</math><br />
<br />
Objective: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is the training loss and <math>\sum_{k=1}^K \omega(f_k)</math> is the complexity of the trees<br />
<br />
So the objective function to be optimized is <math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k),\ f_k \in \mathcal{F}</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math>, <math>T</math> is the number of leaves, and <math>w</math> is the vector of leaf scores<br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_i^{(0)} = 0</math><br />
<br />
<math>\hat y_i^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_i^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y_i^{(t)} = \sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{k=1}\omega(f_k)</math><br />
<br />
<math>= \sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+\omega(f_t)+constant</math><br />
<br />
Taking the second-order Taylor expansion of the objective, with<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)]+\omega(f_t)+constant</math><br />
<br />
where, for the squared-error loss, <math>g_i = \partial_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)</math> and <math>h_i = \partial^2_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2</math><br />
<br />
Define <math>I_j=\{i\,|\,q(x_i)=j\}</math> as the instance set of leaf j, so that <math>f_t(x_i)=w_j</math> for <math>i \in I_j</math>. We can rewrite the target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of leaf j is <math>w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\gamma</math><br />
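<br />
As a sanity check on the derivation above, the following is a minimal NumPy sketch (function names and toy data are our own, not from the paper or the XGBoost source) that computes the optimal leaf weight and a split's gain directly from the gradient statistics:<br />

```python
import numpy as np

def leaf_weight(g, h, lam=1.0):
    # Optimal leaf weight: w* = -sum(g) / (sum(h) + lambda)
    return -g.sum() / (h.sum() + lam)

def split_gain(g, h, left_mask, lam=1.0, gamma=0.0):
    # Gain = 1/2 [G_L^2/(H_L+lam) + G_R^2/(H_R+lam) - (G_L+G_R)^2/(H_L+H_R+lam)] - gamma
    gl, hl = g[left_mask].sum(), h[left_mask].sum()
    gr, hr = g[~left_mask].sum(), h[~left_mask].sum()
    def score(gs, hs):
        return gs * gs / (hs + lam)
    return 0.5 * (score(gl, hl) + score(gr, hr) - score(gl + gr, hl + hr)) - gamma

# Toy squared-error example: g_i = 2*(yhat_i - y_i), h_i = 2
y = np.array([1.0, 0.0, 1.0, 1.0])
yhat = np.zeros(4)                  # predictions from the previous round
g, h = 2 * (yhat - y), 2 * np.ones_like(y)
print(leaf_weight(g, h))            # 2/3: weight the leaf takes if it is not split
```

With squared-error loss and all predictions initialized to zero, this four-instance toy leaf gets weight <math>6/9 \approx 0.67</math>, and any split that separates the two label groups scores a positive gain.<br />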
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
The exact greedy algorithm is a split finding algorithm that enumerates over all possible splits on all the features. However, it is impossible to do this efficiently when the data does not fit entirely into memory.<br />
<br />
The algorithm is as follows:<br />
<br />
[[File: Algorithm_1.png]]<br />
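<br />
To make the enumeration concrete, here is a hedged Python sketch of the inner scan of the algorithm for a single feature (the toy data and names are our own):<br />

```python
import numpy as np

def exact_greedy_split(x, g, h, lam=1.0):
    """Scan every split point on one sorted feature, accumulating left-side stats."""
    order = np.argsort(x)
    g_s, h_s = g[order], h[order]
    G, H = g.sum(), h.sum()
    GL = HL = 0.0
    best_gain, best_thresh = 0.0, None
    for j in range(len(x) - 1):
        GL += g_s[j]; HL += h_s[j]          # running left-side statistics
        GR, HR = G - GL, H - HL             # right side by subtraction
        gain = GL**2/(HL+lam) + GR**2/(HR+lam) - G**2/(H+lam)
        if gain > best_gain:
            best_gain = gain
            best_thresh = (x[order[j]] + x[order[j + 1]]) / 2
    return best_gain, best_thresh

x = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.0, 0.0, -2.0, -2.0])        # squared-error gradients for y = [0, 0, 1, 1]
h = np.full(4, 2.0)
gain, thresh = exact_greedy_split(x, g, h)
print(thresh)                               # 2.5, between the two label groups
```

On the toy data the best threshold is 2.5, exactly between the two label groups; the real algorithm repeats this scan over every feature and picks the overall best split.<br />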
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Because of limited memory and computational efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, then maps the continuous features into buckets split by these candidate points, aggregates the statistics, and finds the best solution among the proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can achieve the same accuracy as the exact greedy algorithm given a reasonable approximation level.<br />
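<br />
A small sketch of the global variant (our own simplification, not the paper's implementation): candidates come from feature percentiles, and only those candidates are scored:<br />

```python
import numpy as np

def propose_candidates(x, n_buckets=4):
    # Global variant: propose candidate splits at feature percentiles up front
    qs = np.linspace(0, 100, n_buckets + 1)[1:-1]
    return np.percentile(x, qs)

def best_approx_split(x, g, h, lam=1.0):
    G, H = g.sum(), h.sum()
    best_gain, best_s = 0.0, None
    for s in propose_candidates(x):          # only the candidates are scored
        left = x < s
        GL, HL = g[left].sum(), h[left].sum()
        GR, HR = G - GL, H - HL
        gain = GL**2/(HL+lam) + GR**2/(HR+lam) - G**2/(H+lam)
        if gain > best_gain:
            best_gain, best_s = gain, s
    return best_gain, best_s
```

On a toy feature whose labels change at the median, the 50th-percentile candidate recovers the same split an exhaustive scan would find, while scoring far fewer thresholds.<br />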
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phases in the approximate algorithm. The most common approach is to split by a feature's percentiles in order to obtain a uniform distribution of the selected data.<br />
<br />
Formally, for the k-th feature, let<br />
<br />
<math>D_k=\{(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)\}</math><br />
<br />
We can use the following rank function, which weights each instance by its second-order gradient statistic <math>h</math>:<br />
<br />
<math>r_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h</math><br />
<br />
The objective is to search for split points <math>\{s_{k1}, s_{k2}, \dots, s_{kl}\}</math> such that<br />
<br />
<math>|r_k(s_{k,j}) - r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
where <math>\epsilon</math> is an approximation factor, so that there are approximately <math>\frac{1}{\epsilon}</math> splitting points.<br />
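<br />
The rank function and the <math>\epsilon</math>-spaced candidate selection can be sketched as follows (a toy simplification: the real weighted quantile sketch is a data structure with merge and prune operations for large, distributed data, which this version omits):<br />

```python
import numpy as np

def weighted_rank(xk, hk, z):
    # r_k(z): fraction of total second-order weight with feature value below z
    return hk[xk < z].sum() / hk.sum()

def propose_weighted_candidates(xk, hk, eps=0.25):
    # Pick split points so consecutive ranks differ by less than eps,
    # giving roughly 1/eps candidates.
    order = np.argsort(xk)
    xs = xk[order]
    cum = np.cumsum(hk[order]) / hk.sum()
    cands, last = [], 0.0
    for v, r in zip(xs, cum):
        if r - last >= eps:
            cands.append(v)
            last = r
    return cands
```

With unit weights and <math>\epsilon = 0.25</math>, four equally spaced values yield four candidates, matching the <math>\frac{1}{\epsilon}</math> rule of thumb.<br />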
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x is often quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g. one-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a feature value is missing at a tree node, the following algorithm is used to learn the optimal default direction:<br />
<br />
[[File: figure_5.png]]<br />
<br />
The algorithm also applies when the user sets a limit on accepted values: out-of-range values are neglected when calculating the score.<br />
<br />
The figure below shows the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
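<br />
The core of the sparsity-aware search, scoring both default directions for the missing entries, can be sketched as follows (a minimal single-split version with our own names, not the paper's algorithm verbatim):<br />

```python
import numpy as np

def default_direction(x, g, h, thresh, lam=1.0):
    """Score sending missing-value instances left vs. right at one split."""
    miss = np.isnan(x)
    G, H = g.sum(), h.sum()
    def side_score(left):
        GL, HL = g[left].sum(), h[left].sum()
        GR, HR = G - GL, H - HL
        return GL**2/(HL+lam) + GR**2/(HR+lam)
    left_nonmiss = ~miss & (x < thresh)     # NaN compares False, so it stays out
    if side_score(left_nonmiss | miss) >= side_score(left_nonmiss):
        return 'left'                       # missing values default left
    return 'right'

x = np.array([1.0, 2.0, np.nan, 4.0])
g = np.array([0.0, 0.0, -2.0, -2.0])        # the missing instance has label 1
h = np.full(4, 2.0)
print(default_direction(x, g, h, thresh=2.5))
```

Here the instance with the missing feature value is sent right, because that groups it with the other instances sharing its label and yields the higher score. Only non-missing entries are ever scanned, which is where the speedup comes from.<br />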
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is getting the data into sorted order. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the pre-sorted block provides the statistics needed for splitting. Let<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd||x||_0 \log n)</math>, where <math>||x||_0</math> is the number of non-missing entries in the training data<br />
<br />
Tree boosting on the block structure costs <math>O(Kd||x||_0 + ||x||_0 \log n)</math>, where the second term is the one-time preprocessing cost of sorting the blocks<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows of the dataset. The blocks are also kept in sorted order, so for the quantile finding step, a linear scan over the sorted columns is enough. Let<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd||x||_0 \log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd||x||_0 + ||x||_0 \log B)</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naive implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key:<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiments, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block<br />
<br />
* Decompress feature values on the fly in an independent thread while reading<br />
<br />
* Compression ratio: about 26% to 29%<br />
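<br />
The idea of block compression can be illustrated with a generic compressor (zlib stands in here for XGBoost's own scheme, which among other things stores row indices as 16-bit offsets within the block, so the ratio below is only indicative):<br />

```python
import zlib
import numpy as np

# One column block: feature values stored in sorted order compress well.
rng = np.random.RandomState(0)
block = np.sort(rng.randint(0, 1000, size=2**16)).astype(np.int32)
raw = block.tobytes()
packed = zlib.compress(raw, level=6)
print(len(packed) / len(raw))       # compression ratio (smaller is better)
```

Trading a little decompression CPU time for much less disk I/O is exactly the point: the decompression thread overlaps with training-side computation.<br />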
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and rank objective functions as well as user-defined objectives, but also supports popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events and is used to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, in which documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last dataset is used for the distributed and out-of-core settings. The boosted trees share a common setting of maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against scikit-learn and R's GBM, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which makes it fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, empirically because the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|left|Comparison between XGBoost and pGBRT on the Yahoo LTRC dataset]]<br />
[[File: Table_4.png]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression helps to speed up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. A less dramatic transition point is observed when the system runs out of file cache, due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|thumb|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks) and dataset storage on AWS S3. Compared with Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as more machines are added, and it has large potential to handle even larger data, as it managed to process the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|Scaling of XGBoost with different numbers of machines on the full 1.7 billion-instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper describes XGBoost, a scalable end-to-end tree boosting system, and how it is used effectively to achieve state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning; for greater efficiency, an approximate algorithm is needed, so a sparsity-aware algorithm and a weighted quantile sketch are introduced for the approximate setting. Further, the paper gives insight into the system design, including the column block structure and cache-aware access patterns, and explains why XGBoost scales better and is more widely used than other systems.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40948XGBoost: A Scalable Tree Boosting System2018-11-22T18:40:27Z<p>Q39zhao: /* 6.4 Learning to Rank */</p>
Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40947XGBoost: A Scalable Tree Boosting System2018-11-22T18:39:51Z<p>Q39zhao: /* 6.4 Learning to Rank */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_i), f_k \in \mathcal{F}</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y_i^{(0)} = 0</math><br />
<br />
<math>\hat y_i^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y_i^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y_i^{(t)} = \sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{k=1}\omega(f_k)</math><br />
<br />
<math>= \sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+\omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)]+\omega(f_t)+constant</math><br />
<br />
where, for the squared-error loss, <math>g_i = \partial_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)</math> and <math>h_i = \partial^2_{\hat y_i^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of leaf j is <math>w_j^*=-\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\gamma</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is as follows:<br />
<br />
[[File: Algorithm_1.png]]<br />
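A pure-Python sketch of the exact greedy scan over a single feature (names and the default <code>lam</code>/<code>gamma</code> values are illustrative; Algorithm 1 additionally loops over all features and handles missing values):<br />

```python
# Exact greedy split finding on one feature: sort instances by feature
# value, accumulate left/right gradient sums in one pass, and evaluate
# the loss-reduction formula from Section 2 at every distinct threshold.
def best_split(x, g, h, lam=1.0, gamma=0.0):
    order = sorted(range(len(x)), key=lambda i: x[i])
    G, H = sum(g), sum(h)
    GL = HL = 0.0
    best_gain, best_thr = 0.0, None
    for k in range(len(order) - 1):
        i = order[k]
        GL += g[i]
        HL += h[i]
        if x[order[k]] == x[order[k + 1]]:
            continue  # cannot split between equal feature values
        GR, HR = G - GL, H - HL
        gain = 0.5 * (GL * GL / (HL + lam) + GR * GR / (HR + lam)
                      - G * G / (H + lam)) - gamma
        if gain > best_gain:
            best_gain = gain
            best_thr = (x[order[k]] + x[order[k + 1]]) / 2.0
    return best_gain, best_thr

print(best_split([1.0, 2.0, 3.0, 4.0], [2.0, 2.0, -2.0, -2.0], [1.0, 1.0, 1.0, 1.0]))
```

The sort dominates the cost, which motivates the presorted column blocks of Section 4.<br />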
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
To limit memory consumption and improve computational efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of the feature distribution, then maps the continuous features into buckets split by these candidate points, aggregates the gradient statistics per bucket, and finds the best solution among the proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
The figure above shows that the quantile strategy can reach the same accuracy as the exact greedy algorithm given a reasonable approximation level.<br />
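The propose/bucket/aggregate steps can be sketched in pure Python (<code>bisect</code> is from the standard library; the function names and the simple percentile rule are illustrative simplifications of the actual proposal strategy):<br />

```python
import bisect

# Step 1: propose candidate split points at (approximate) percentiles
# of the feature's empirical distribution.
def propose_candidates(x, num_quantiles):
    xs = sorted(x)
    idx = [round(q * (len(xs) - 1) / num_quantiles) for q in range(1, num_quantiles)]
    return sorted(set(xs[i] for i in idx))

# Steps 2-3: map each value to its bucket and aggregate the gradient
# statistics; the best split is then found by scanning the few buckets
# instead of every distinct feature value.
def aggregate_buckets(x, g, h, candidates):
    G = [0.0] * (len(candidates) + 1)
    H = [0.0] * (len(candidates) + 1)
    for xi, gi, hi in zip(x, g, h):
        b = bisect.bisect_left(candidates, xi)  # value equal to a candidate falls left
        G[b] += gi
        H[b] += hi
    return G, H
```

In the global variant <code>propose_candidates</code> runs once per tree; in the local variant it is rerun on the instances of each new node.<br />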
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Splitting the data set is one of the most important phases of the approximate algorithm. The most common approach is to split by a feature’s percentiles in order to obtain a uniform distribution of the selected data.<br />
<br />
Formally, if we have the set<br />
<br />
<math>D_k=\{(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)\}</math><br />
<br />
We can define the following rank function, which uses the second-order gradients <math>h</math> as instance weights:<br />
<br />
<math>r_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points <math>\{s_{k1}, s_{k2}, \dots, s_{kl}\}</math> such that<br />
<br />
<math>|r_k(s_{k,j}) - r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
where <math>\epsilon</math> is an approximation factor, so that there are approximately <math>\frac{1}{\epsilon}</math> candidate split points.<br />
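A direct illustration of the rank function and an <math>\epsilon</math>-spaced candidate selection; the real weighted quantile sketch achieves this with mergeable and prunable summaries rather than a full pass over the sorted data (names are illustrative):<br />

```python
# points: list of (feature value x, hessian h); r_k(z) is the weighted
# fraction of hessian mass strictly below z.
def rank(points, z):
    total = sum(h for _, h in points)
    return sum(h for x, h in points if x < z) / total

# Greedily keep a candidate whenever the weighted rank has advanced by
# at least eps since the last kept candidate.
def pick_candidates(points, eps):
    xs = sorted(set(x for x, _ in points))
    cands = [xs[0]]
    for x in xs[1:]:
        if rank(points, x) - rank(points, cands[-1]) >= eps:
            cands.append(x)
    return cands
```

With unit hessians this reduces to ordinary (unweighted) quantiles, which is why the weighting only matters for losses whose second derivative varies across instances.<br />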
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (e.g. one-hot encoding)<br />
<br />
To handle sparsity in the data, the paper proposes adding a default direction to each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When the feature value needed at a tree node is missing, the following algorithm computes the optimal default direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values are simply neglected when calculating the score.<br />
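The core of the default-direction choice can be sketched as follows: the gradient statistics of the instances with missing values are sent to each side in turn, and the side giving the higher gain becomes the default (names and the regularization defaults are illustrative):<br />

```python
# (GL, HL), (GR, HR): gradient sums of instances with a present feature
# value, split at some threshold; (G_miss, H_miss): sums over instances
# whose value for this feature is missing.
def default_direction(GL, HL, GR, HR, G_miss, H_miss, lam=1.0, gamma=0.0):
    def gain(gl, hl, gr, hr):
        G, H = gl + gr, hl + hr
        return 0.5 * (gl * gl / (hl + lam) + gr * gr / (hr + lam)
                      - G * G / (H + lam)) - gamma
    left = gain(GL + G_miss, HL + H_miss, GR, HR)   # send missing left
    right = gain(GL, HL, GR + G_miss, HR + H_miss)  # send missing right
    return ("left", left) if left >= right else ("right", right)
```

Because only present values are enumerated, the cost of split finding scales with the number of non-missing entries rather than the full matrix size.<br />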
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for splitting.<br />
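The idea can be sketched as an argsort of row indices per feature column, computed once before training (pure Python sketch; in XGBoost the blocks are stored in a compressed-column layout):<br />

```python
# rows: list of feature vectors. Returns, for each column, the row
# indices sorted by that column's value; later split scans walk these
# index lists instead of re-sorting at every node.
def build_column_blocks(rows):
    n_features = len(rows[0])
    return [
        sorted(range(len(rows)), key=lambda i: rows[i][j])
        for j in range(n_features)
    ]

blocks = build_column_blocks([[3.0, 1.0], [1.0, 2.0], [2.0, 0.0]])
print(blocks)  # column 0 order: rows 1,2,0; column 1 order: rows 2,0,1
```

Since every column is independently sorted, split finding can also be parallelized across features.<br />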
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
The original sparsity-aware algorithm costs <math>O(Kd\|x\|_0\log n)</math>, where <math>\|x\|_0</math> denotes the number of non-missing entries and <math>n</math> the number of training examples<br />
<br />
Tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0\log n)</math>, since the sort is paid only once in preprocessing<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of rows of the dataset. The blocks are also kept in sorted order, so for the quantile finding step a linear scan over the sorted columns is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
The original approximate algorithm with binary search costs <math>O(Kd\|x\|_0\log q)</math><br />
<br />
The approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0\log B)</math><br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operations.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads per thread and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Experiments show that a block size of <math>2^{16}</math> examples per block balances the cache properties and parallelization.<br />
<br />
For large datasets, the data might not fit into main memory and has to be stored on disk. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Feature values are compressed in each block<br />
<br />
* Feature values are decompressed on the fly by an independent thread when the block is loaded<br />
<br />
* Compression ratio: roughly 26%–29% of the original size<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
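The block-compression idea above can be illustrated with a general-purpose compressor from the standard library (zlib here is only a stand-in; XGBoost uses its own column-compression scheme):<br />

```python
import struct
import zlib

# Pack a block of float32 feature values to bytes, compress it for disk,
# and decompress it losslessly when the block is scanned.
def compress_block(values):
    raw = struct.pack("%df" % len(values), *values)
    return zlib.compress(raw)

def decompress_block(blob, n):
    return list(struct.unpack("%df" % n, zlib.decompress(blob)))

values = [float(i % 16) for i in range(1024)]  # repetitive, so it compresses well
blob = compress_block(values)
assert decompress_block(blob, len(values)) == values
print(len(blob) / (4 * len(values)))           # compressed size as fraction of raw
```

The trade-off is the same as in the paper: extra CPU for decompression in exchange for less disk traffic, which pays off when disk reading is the bottleneck.<br />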
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost not only supports various weighted classification and rank objective functions (including user-defined ones), but also integrates with popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba’s Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling of the system in the out-of-core and distributed settings. The first three datasets are used in the single-machine parallel setting, and the last in the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison. <br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against scikit-learn and R’s GBM, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|left|Comparison between XGBoost and pGBRT on the Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of 3, and sharding onto 2 disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, thanks to the larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting on a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks) and dataset storage on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems and, by taking advantage of out-of-core computing, smoothly scales to all 1.7 billion instances, whereas the baseline systems can only handle subsets of the data. XGBoost’s performance scales linearly as machines are added, and it has large potential to handle even larger data, having managed 1.7 billion instances with only 4 machines.<br />
<br />
[[File:criteo.png|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is to discuss how XGBoost, a scalable end-to-end tree boosting system, can be used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm finds the best split in tree learning; to be more efficient, an approximate algorithm is introduced, together with the sparsity-aware split finding and the weighted quantile sketch. Further, the paper gives insight into the XGBoost system design, in particular the column block and cache-aware access patterns, and explains why XGBoost scales better and is more widely used than other systems in data science practice.</div>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in exact greedy algorithm, a cache-aware prefetching algorithm with an internal buffer allocated for fetching the gradient statistics is proposed.<br />
For approximate algorithm, since multiple blocks are used for storing the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
An overly small block size results in small workload and inefficient parallelization<br />
<br />
· An overly large block size results in cache misses as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, block size of 216 balances the cache property and parallelization.<br />
<br />
For large size dataset, data might not be fitted into main memory and has to be stored in disk space. So, enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run in concurrence with disk reading to reduce the overhead. Two major techniques used to improve the out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature value in each block<br />
<br />
* Decompress feature value through the thread<br />
<br />
* Compress ratio: 26~29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternative manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* Training-thread alternatively reads data from each buffer<br />
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open source package. Not only XGBoost supports various weighted classification and objective functions(rank, user-defined), but also supports popular languages(python, R, Julia), data science pipelines (scikit-learn), big-data stacks(Flink, Spark), cloudplatform (Alibaba’s Tianchi8) and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in performance evaluations. The first dataset Allstate insurance claim dataset9 that was used for predicting the likelihood of an insurance claim, evaluates the impact of sparsity-aware algorithm. The second dataset Higgs boson dataset10 that was produced from physics simulation events classifies whether an event corresponds to the Higgs boson. The third dataset is the Yahoo! learning for ranking documents by query relevance. The last dataset is the criteo terabyte click log dataset11 that was pre-processed as a tree-based model, evaluates the scaling property of the system in the out-of-core and the distributed settings.The first three datasets are used for the single machine parallel setting, and the last dataset was used for the distributed and out-of-core settings. Boosting trees have a common setting of maximum depth equals 8, shrinkage equals 0.1 and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparisons. <br />
Compared with R’s GBM, the first evaluation sets XGBoost running using the exact greedy algorithm fairly on Higgs-1M data and have scikit-learn finish running along the side, and shows that XGBoost runs more than 10x faster than scikit-learn. R’s GBM greedily expands one branch of a tree fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsamples gives slightly worse performance possibly due to few important features in this dataset.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on the learning to rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and obviously runs faster. Subsampling columns not only reduces running time, but also gives a bit higher performance, empirically due to that the subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|Comparison between XGBoost and PG-BRT on Yahoo LTRC dataset]]<br />
<br />
[[File: Table_4.png]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates XGBoost system in the out-of-core setting on the criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSD, 60 GB RAM). Compression helps to speed up computation by factor of 3, and sharding into 2 disks further gives 2x speedup. It’s observed with a less dramatic transition point when the system runs out of file cache due to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|Comparison of out-of-core methods on different subsets of crate data. The missing data points are due to out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2x large machines with 8 virtual cores each, 30GB RAM, two 80GB SSD local disks, and dataset storage on AWS S3. Comparing against Spark MLLib and H2O 12, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, and takes advantage of out-of-core computing and smoothly scale to all 1.7 billion instances, whereas the baseline systems are only able to handle subset of the data. XGBoost’s performance scales linearly as adding more machines, and has large potential to handle even larger data as it managed to handle 1.7 billion data with only 4 machines.<br />
<br />
[[File:criteo.png|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subset of crate data]]<br />
<br />
[[File: machine.png|Scaling of XGboost with different number of machines on criteo full 1.7 billion dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is discussing how a scalable end-to-end tree boosting system, which is XGBoost, effective used. It helps us to achieve state-of-the-art results on variety experiment challenges. We use the exact greedy algorithm in order to find the best split in tree learning. Be more effective, an approximate algorithm is needed. Therefore we introduce sparsity-aware algorithm and weighted quantile sketch for approximate algorithm. Further, we gain an insight into XGBoost system on column block, cache-aware access patterns, and explain why XGBoost scales is better and wider use than other systems in the existing data statistic.</div>Q39zhaohttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=XGBoost:_A_Scalable_Tree_Boosting_System&diff=40944XGBoost: A Scalable Tree Boosting System2018-11-22T18:38:43Z<p>Q39zhao: /* 6.5 Out-of-core Experiment */</p>
<hr />
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm also applies when the user sets a limit on accepted values: out-of-range values are simply treated as missing and neglected when calculating the score.<br />
<br />
The figure below compares a basic implementation with the sparsity-aware algorithm on the Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity-aware algorithm runs about 50 times faster than the naive implementation.<br />
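The idea can be sketched for a single feature as follows; this is an illustrative simplification (it assumes distinct feature values and uses the loss-reduction formula from Section 2.1), not the paper's actual code:<br />

```python
def gain(G_L, H_L, G, H, lam=1.0):
    """Loss reduction of a split, cf. the Obj_split formula in Section 2.1."""
    G_R, H_R = G - G_L, H - H_L
    return 0.5 * (G_L**2 / (H_L + lam) + G_R**2 / (H_R + lam)
                  - G**2 / (H + lam))

def sparsity_aware_best_split(x, g, h, lam=1.0):
    """Best split on one feature where x may contain None (missing).
    Both default directions are tried; only non-missing entries are scanned.
    Returns (gain, threshold, default_goes_left)."""
    present = sorted((xi, gi, hi) for xi, gi, hi in zip(x, g, h)
                     if xi is not None)
    G, H = sum(g), sum(h)                       # totals include missing rows
    G_miss = G - sum(gi for _, gi, _ in present)
    H_miss = H - sum(hi for _, _, hi in present)
    best = (float("-inf"), None, None)
    for default_left in (True, False):
        # missing rows follow the current default direction
        G_L, H_L = (G_miss, H_miss) if default_left else (0.0, 0.0)
        for xi, gi, hi in present[:-1]:         # split after each value
            G_L, H_L = G_L + gi, H_L + hi
            score = gain(G_L, H_L, G, H, lam)
            if score > best[0]:
                best = (score, xi, default_left)
    return best
```

The key point is that the scan cost depends only on the number of non-missing entries, which is why the sparsity-aware algorithm is so much faster on sparse data.<br />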
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is sorting the data. In XGBoost, data is stored in in-memory units called blocks.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In the exact greedy algorithm, the entire dataset is stored in a single block, so a single scan over the block provides the statistics needed for splitting. With<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
n = number of training examples, and <math>\|x\|_0</math> = number of non-missing entries in the training data,<br />
<br />
the original sparsity-aware algorithm costs <math>O(Kd\|x\|_0\log n)</math>, while<br />
<br />
tree boosting on the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0\log n)</math>, since the <math>\log n</math> sorting cost is paid only once during preprocessing.<br />
<br />
For the approximate algorithm, the dataset can be stored in multiple blocks, each containing a subset of tuples in the dataset. Since the blocks are kept in sorted order, a linear scan over the sorted columns suffices for the quantile finding step. With<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block,<br />
<br />
the original approximate algorithm with binary search costs <math>O(Kd\|x\|_0\log q)</math>, while<br />
<br />
the approximate algorithm with the block structure costs <math>O(Kd\|x\|_0 + \|x\|_0\log B)</math>.<br />
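The benefit of the block layout can be illustrated with a small sketch (hypothetical names): each feature column is sorted once, keeping pointers back to the row indices, after which every split search is a single linear scan that accumulates gradient statistics in sorted order, with no re-sorting per tree or per level:<br />

```python
def build_column_block(columns):
    """Pre-sort each feature column once; store row indices (a CSC-like
    layout).  This sorting cost is paid a single time before training."""
    return [sorted(range(len(col)), key=col.__getitem__) for col in columns]

def scan_column(block, columns, g, h, col_id):
    """One linear scan over a pre-sorted column yields the cumulative
    statistics (G_L, H_L) for every candidate split position."""
    G_L = H_L = 0.0
    prefix = []
    for row in block[col_id]:
        G_L += g[row]
        H_L += h[row]
        prefix.append((columns[col_id][row], G_L, H_L))
    return prefix
```

Because the sorted order is shared across all trees and all tree levels, repeated sorting is avoided and each split search is a linear scan over the non-missing entries.<br />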
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration introduces an immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiments, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk, so enabling out-of-core computation is important for achieving scalable learning. It is ideal to have computation run concurrently with disk reading to reduce the overhead. Two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Feature values are compressed in each block<br />
<br />
* Feature values are decompressed on the fly by an independent thread when a block is loaded<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* Used when multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread that fetches data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
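Under the assumption that each shard is just an iterable of blocks, this pre-fetcher/consumer pattern can be sketched with Python's standard threading and queue modules (all names hypothetical):<br />

```python
import queue
import threading

def prefetcher(shard, buffer):
    """Per-disk thread: read blocks from one shard into its in-memory buffer."""
    for block in shard:                # stands in for an actual disk read
        buffer.put(block)              # blocks while the bounded buffer is full
    buffer.put(None)                   # end-of-shard sentinel

def read_all_shards(shards, buffer_size=2):
    """Training thread: alternately drain each shard's buffer."""
    buffers = [queue.Queue(maxsize=buffer_size) for _ in shards]
    threads = [threading.Thread(target=prefetcher, args=(s, b))
               for s, b in zip(shards, buffers)]
    for t in threads:
        t.start()
    live, blocks = list(buffers), []
    while live:
        for b in list(live):           # round-robin over the shard buffers
            item = b.get()             # waits for the pre-fetcher if needed
            if item is None:
                live.remove(b)
            else:
                blocks.append(item)
    for t in threads:
        t.join()
    return blocks
```

The bounded queues provide the overlap between disk reading and computation: while the training thread processes one block, the pre-fetcher threads keep filling the buffers.<br />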
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost supports not only weighted classification and various objective functions (ranking, user-defined), but also popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events; the task is to classify whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, where documents are ranked by query relevance. The last is the Criteo terabyte click log dataset, pre-processed for a tree-based model, which evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last is used for the distributed and out-of-core settings. All boosted trees share a common setting: maximum depth 8, shrinkage 0.1, and no column subsampling.<br />
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against R's GBM and scikit-learn, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which is fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance here, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is considerably faster. Subsampling columns not only reduces running time, but also gives slightly higher performance, likely because subsampling helps prevent overfitting. <br />
<br />
[[File:Threads.png|200px|thumb|left|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding onto two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic, thanks to larger disk throughput and better utilization of computation resources. <br />
<br />
[[File: Training.png|200px|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30GB RAM, two 80GB SSD local disks) and dataset storage on AWS S3. Compared against Spark MLlib and H2O, in-memory analytics frameworks that need to store the data in RAM, XGBoost can switch to the out-of-core setting when it runs out of memory. With limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as more machines are added, and it has large potential to handle even bigger data, as it managed to handle the 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|Scaling of XGBoost with different numbers of machines on the full Criteo dataset of 1.7 billion instances. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly superlinear.]]<br />
<br />
== Conclusion ==<br />
<br />
This paper shows how XGBoost, a scalable end-to-end tree boosting system, can be used effectively, achieving state-of-the-art results on a variety of challenges. The exact greedy algorithm finds the best split in tree learning; to be more efficient, an approximate algorithm is needed, supported by the sparsity-aware split finding and the weighted quantile sketch. Further, an insight into the XGBoost system design, namely the column block structure and cache-aware access patterns, explains why XGBoost scales better and is more widely used than other systems.</div>
<div>== Presented by == <br />
*Qianying Zhao<br />
*Hui Huang<br />
*Lingyun Yi<br />
*Jiayue Zhang<br />
*Siao Chen<br />
*Rongrong Su<br />
*Gezhou Zhang<br />
*Meiyu Zhou<br />
<br />
== 2 Tree Boosting In A Nutshell ==<br />
<br />
=== 2.1 Regularized Learning Objective ===<br />
<br />
1. Regression Decision Tree (also known as classification and regression tree):<br />
* Decision rules are the same as in decision tree<br />
* Contains one score in each leaf value<br />
<br />
[[File:cart.PNG]]<br />
[[File:tree_ensemble_model.PNG]]<br />
<br />
<br />
2. Model and Parameter<br />
<br />
Model: Assuming there are K trees<br />
<math>\hat y_i = \sum^K_{k=1} f_k(x_I), f_k \in Ƒ</math><br />
<br />
Object: <math>Obj = \sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k)</math><br />
<br />
where <math>\sum^n_{i=1}l(y_i,\hat y_i)</math> is training loss, <math>\sum_{k=1}^K \omega(f_k)</math> is complexity of Trees<br />
<br />
So the target function that needed to optimize is:<math>\sum_{i=1}^n l(y_i,\hat y_i)+\sum^K_{k=1}\omega(f_k), f_k \in Ƒ</math>, where <math>\omega(f) = \gamma T+\frac{1}{2}\lambda||w||^2</math><br />
<br />
For example:<br />
<br />
[[File:leave.png]]<br />
<br />
Let's look at <math>\hat y_i</math><br />
<br />
<math>\hat y{i}^{(0)} = 0</math><br />
<br />
<math>\hat y{i}^{(1)} = f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)</math><br />
<br />
<math>\hat y{i}^{(2)} = f_1(x_i) + f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)</math><br />
<br />
...<br />
<br />
<math>\hat y{i}^{(t)} = \sum^t_{i=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)</math><br />
<br />
So <math>Obj^{(t)} = \sum_{i=1}^n l(y_i,\hat y_i^{(t)})+\sum^t_{i=1}\omega(f_i)</math><br />
<br />
=<math>\sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t(x_i))+omega(f_t)+constant</math><br />
<br />
Take Taylor Expansion of the objective<br />
<br />
<math>f(x+\delta x) \simeq f(x)+f^{'}(x)\delta x+\frac{1}{2}f^{''}(x)\delta x^2</math><br />
<br />
then<br />
<br />
<math>Obj^{(t)} = \sum^n_{i=1}[l(y_i,\hat y_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_i(x_i)]+\omega(f_t)+constant</math><br />
<br />
where <math>g_i =ə_{(\hat y_i)^{(t-1)}}(\hat y_i^{(t-1)}-y_i)^2 = 2(\hat y_i^{(t-1)}-y_i)h_i = ə^2_{(\hat y_i)^{(t-1)}}(\hat y_i^{t-1)}-y_i)^2 =2</math><br />
<br />
Define <math>I_j={i|q(x_i)=j}</math> as the instance set of leaf j and <math>f_t(x_i)=w_j</math>. We can rewrite target function as follows<br />
<br />
<math>Obj^{(t)} = \sum^{T}_{j=1}[(\sum_{i\in I_j} g_i)w_{j}+\frac{1}{2}(\sum_{i\in I_j}h_i + \lambda)w_j^2]+\gamma T</math><br />
<br />
The optimal weight <math>w^*_j</math> of node j is <math>w_j^*=\frac{\sum_{i\in I_j}g_i}{\sum_{i\in I_j}h_i+\lambda}</math><br />
<br />
The loss reduction after the split is given by<br />
<br />
<math>Obj_{split}=\frac{1}{2}[\frac{(\sum_{i \in I_l} g_i)^2}{\sum_{i \in I_l} h_i+\lambda}+\frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i+\lambda}-\frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i+\lambda}]-\lambda</math><br />
<br />
== 3 Split Finding Algorithms ==<br />
<br />
=== 3.1 Exact Greedy Algorithm ===<br />
<br />
Exact greedy algorithm is a split finding algorithm enumerates over all the possible splits on all the features. However, it is impossible to efficiently do so when the data does not fit entirely into memory.<br />
<br />
The algorithm is following:<br />
<br />
[[File: Algorithm_1.png]]<br />
<br />
=== 3.2 Approximate Algorithm ===<br />
<br />
Due to limit computational memory and efficiency, the paper gives an approximate algorithm. The algorithm first proposes candidate splitting points according to percentiles of feature distribution, then it maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.<br />
<br />
[[File:Algorithm_2.png]]<br />
<br />
The global variant proposes all the candidate splits during the initial phase of tree construction, and uses the same proposals for split finding at all levels. The local variant re-proposes after each split.<br />
<br />
[[File:iterations.png]]<br />
<br />
From the figure above, the quantile strategy can get the same accuracy as exact greedy given reasonable approximation level.<br />
<br />
=== 3.3 Weighted Quantile Sketch ===<br />
<br />
Data set splitting is one of the most important phase in the approximate algorithm. The most common approach is to split by feature’s percentile in order to obtain an uniform distribution of the selected data.<br />
<br />
Formal, if we have the set<br />
<br />
<math>D_k={(x_{1k},h_1),(x_{2k},h_2),...,(x_{nk},h_n)}</math><br />
<br />
We can use the following function to rank:<br />
<br />
<math>R_k(z) = \frac{1}{\sum_{(x,h) \in D_k} h} \sum_{(x,h) \in D_k, x<z} h,</math><br />
<br />
The objective is to search for split points {s_{k1}, s_{k2}, …, s_{kl}} such that<br />
<br />
<math>|r_k(s_{k,j}) – r_k(s_{k,j+1})| < \epsilon,</math><br />
<br />
Where <math>\epsilon</math> is an approximation factor. In general, it should approximately have <math>\frac{1}{\epsilon}</math> splitting points.<br />
<br />
=== 3.4 Sparsity-aware Split Finding ===<br />
<br />
In real life, the input x may often be quite sparse. Possible causes are:<br />
<br />
1. Data set contains missing values<br />
<br />
2. Large amount of zero entries <br />
<br />
3. Artifacts of feature engineering (ex. One-hot encoding)<br />
<br />
In order to solve the sparsity behavior in the data, it is proposed to create a default direction in each tree node, as shown below:<br />
<br />
[[File: figure_4.png]]<br />
<br />
When a value in a tree node is missing, we can use the following algorithm to calculate the optimal direction to proceed:<br />
<br />
[[File: figure_5.png]]<br />
<br />
This algorithm is also applicable to the situation where user can set a limit on the accepted value, and neglect the out-of-range value when calculating the score.<br />
<br />
The figure below shows the result of the comparison between a basic implementation and the sparsity aware algorithm on a Allstate-10K dataset.<br />
<br />
[[File: Algorithm3.png]]<br />
<br />
We can see that the sparsity aware algorithm performs 50 times better than the simple implementation.<br />
<br />
== 4 System Design ==<br />
<br />
=== 4.1 Column Block for Parallel Learning ===<br />
<br />
Generally, the most time-consuming part of tree learning is to get a sorted data. In XGBoost, data is stored in in-memory units, called Block.<br />
<br />
[[File: Figure_6.png]]<br />
<br />
Each column represents a feature and is sorted by the feature value.<br />
<br />
In exact greedy algorithm, the entire dataset is stored in a single block. So, a single scan over the block will provide us the statistics needed for splitting.<br />
<br />
d = maximum depth of the tree<br />
<br />
K = total number of trees<br />
<br />
Original spase aware algorithm costs<br />
<br />
Tree boosting on block structure costs<br />
<br />
For Approximate algorithm, the dataset can be stored in multiple blocks. Each block contains a subset of tuples in the dataset. The blocks are also in sorted order, so for the quantile finding step, a linear scan over the sorted column is enough.<br />
<br />
q = number of proposal candidates in the dataset<br />
<br />
B = maximum number of rows in each block<br />
<br />
Original approximate algorithm with binary search costs<br />
<br />
Approximate algorithm with block structure costs<br />
<br />
=== 4.2 Cache-aware Access ===<br />
<br />
A naïve implementation of split enumeration brings in immediate read/write dependency between the accumulation and the non-continuous memory fetch operation.<br />
<br />
[[File: figure_8.png]]<br />
<br />
To overcome this issue in the exact greedy algorithm, a cache-aware prefetching algorithm is proposed, with an internal buffer allocated for fetching the gradient statistics.<br />
For the approximate algorithm, since multiple blocks are used to store the dataset, choosing the correct block size is the key.<br />
<br />
[[File: figure_9.png]]<br />
<br />
* An overly small block size results in small workloads per thread and inefficient parallelization<br />
<br />
* An overly large block size results in cache misses, as the gradient statistics do not fit into the CPU cache<br />
<br />
Through experiment, a block size of <math>2^{16}</math> examples per block balances the cache property and parallelization.<br />
<br />
For large datasets, the data may not fit into main memory and has to be stored on disk. So enabling out-of-core computation is important for achieving scalable learning. Ideally, computation runs concurrently with disk reading to reduce the overhead. The two major techniques used to improve out-of-core computation are shown below:<br />
<br />
1) Block Compression<br />
<br />
* Compress feature values in each block<br />
<br />
* Decompress feature values on the fly in an independent thread<br />
<br />
* Compression ratio: roughly 26% to 29%<br />
<br />
2) Block Sharding<br />
<br />
* When multiple disks are available<br />
<br />
* Shard the dataset onto multiple disks in an alternating manner<br />
<br />
* Each disk has a pre-fetcher thread to fetch data into an in-memory buffer<br />
<br />
* The training thread alternately reads data from each buffer<br />
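The two techniques above can be sketched together in Python. This is illustrative only: `zlib` stands in for the block compression scheme, and a pre-fetcher thread decompresses blocks into a bounded in-memory buffer while the training thread consumes them, so decompression overlaps with training work.<br />

```python
import queue
import threading
import zlib

# Sketch of out-of-core block handling: compressed blocks are
# decompressed by a pre-fetcher thread into an in-memory buffer,
# and the training thread reads from that buffer.

def prefetch(compressed_blocks, buffer):
    for blob in compressed_blocks:
        buffer.put(zlib.decompress(blob))  # decompress off the training thread
    buffer.put(None)                       # sentinel: no more blocks

def train_from_disk(compressed_blocks):
    buffer = queue.Queue(maxsize=4)        # bounded in-memory buffer
    threading.Thread(target=prefetch, args=(compressed_blocks, buffer),
                     daemon=True).start()
    consumed = []
    while (block := buffer.get()) is not None:
        consumed.append(block)             # training would process it here
    return consumed
```

With multiple disks, block sharding would run one such pre-fetcher per disk, each filling its own buffer.<br />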
<br />
== 6 End To End Evaluations ==<br />
<br />
=== 6.1 System Implementation ===<br />
<br />
The system implementation of XGBoost is a portable and reusable open-source package. XGBoost supports not only various weighted classification and rank objective functions (including user-defined ones), but also popular languages (Python, R, Julia), data science pipelines (scikit-learn), big-data stacks (Flink, Spark), cloud platforms (Alibaba's Tianchi), and more.<br />
<br />
=== 6.2 Dataset and Setup ===<br />
<br />
Four datasets are used in the performance evaluations. The first, the Allstate insurance claim dataset, was used for predicting the likelihood of an insurance claim and evaluates the impact of the sparsity-aware algorithm. The second, the Higgs boson dataset, was produced from physics simulation events and classifies whether an event corresponds to the Higgs boson. The third is the Yahoo! learning-to-rank dataset, which ranks documents by query relevance. The last is the Criteo terabyte click log dataset, which was pre-processed for a tree-based model and evaluates the scaling property of the system in the out-of-core and distributed settings. The first three datasets are used for the single-machine parallel setting, and the last is used for the distributed and out-of-core settings. The boosted trees share a common setting: maximum depth of 8, shrinkage of 0.1, and no column subsampling.<br />
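As an illustration, the common settings above correspond to the following XGBoost parameters (`max_depth`, `eta`, and `colsample_bytree` are XGBoost's actual parameter names; the evaluation scripts themselves are not reproduced here):<br />

```python
# Common boosting settings from the evaluation, expressed as an
# XGBoost parameter dictionary.
params = {
    "max_depth": 8,           # maximum tree depth
    "eta": 0.1,               # shrinkage (learning rate)
    "colsample_bytree": 1.0,  # no column subsampling
}

# With the xgboost package installed and a DMatrix `dtrain` built,
# training would look like:
#   booster = xgboost.train(params, dtrain, num_boost_round=500)
```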
<br />
[[File:Table_2.png]]<br />
<br />
=== 6.3 Classification ===<br />
<br />
Four groups of XGBoost performance evaluations are conducted by comparison.<br />
The first evaluation runs XGBoost with the exact greedy algorithm on the Higgs-1M data against scikit-learn and R's GBM, and shows that XGBoost runs more than 10x faster than scikit-learn. R's GBM greedily expands one branch of a tree, which makes it fast but results in lower accuracy, while both scikit-learn and XGBoost learn a full tree. Column subsampling gives slightly worse performance, possibly because this dataset has few important features.<br />
<br />
[[File: Table_3.png]]<br />
<br />
=== 6.4 Learning to Rank ===<br />
<br />
Group 2 evaluates XGBoost on learning-to-rank problems by comparing against pGBRT. XGBoost runs the exact greedy algorithm and is clearly faster. Subsampling columns not only reduces running time but also gives slightly higher performance, likely because the subsampling helps prevent overfitting.<br />
<br />
[[File: Threads.png|Comparison between XGBoost and pGBRT on the Yahoo! LTRC dataset]]<br />
<br />
[[File: Table_4.png]]<br />
<br />
=== 6.5 Out-of-core Experiment ===<br />
<br />
Group 3 evaluates the XGBoost system in the out-of-core setting on the Criteo data on one AWS c3.8xlarge machine (32 vcores, two 320 GB SSDs, 60 GB RAM). Compression speeds up computation by a factor of three, and sharding across two disks gives a further 2x speedup. The transition point when the system runs out of file cache is less dramatic than expected, thanks to larger disk throughput and better utilization of computation resources.<br />
<br />
[[File: Training.png|Comparison of out-of-core methods on different subsets of the Criteo data. The missing data points are due to running out of disk space.]]<br />
<br />
=== 6.6 Distributed Experiment ===<br />
<br />
Group 4 evaluates the XGBoost system in the distributed setting by setting up a YARN cluster on EC2 with m3.2xlarge machines (8 virtual cores each, 30 GB RAM, two 80 GB SSD local disks) and dataset storage on AWS S3. It is compared against Spark MLLib and H2O, in-memory analytics frameworks that need to store the data in RAM; XGBoost, by contrast, can switch to the out-of-core setting when it runs out of memory. With the limited computing resources, XGBoost runs faster than the baseline systems, takes advantage of out-of-core computing, and smoothly scales to all 1.7 billion instances, whereas the baseline systems are only able to handle a subset of the data. XGBoost's performance scales linearly as machines are added, and it has large potential to handle even larger data, as it managed to handle 1.7 billion instances with only four machines.<br />
<br />
[[File:criteo.png|Comparison of different distributed systems on 32 EC2 nodes for 10 iterations on different subsets of the Criteo data]]<br />
<br />
[[File: machine.png|Scaling of XGBoost with different numbers of machines on the full 1.7 billion instance Criteo dataset. Using more machines results in more file cache and makes the system run faster, causing the trend to be slightly super-linear.]]<br />
<br />
== Conclusion ==<br />
<br />
The purpose of this paper is to discuss how XGBoost, a scalable end-to-end tree boosting system, is used effectively. It achieves state-of-the-art results on a variety of experimental challenges. The exact greedy algorithm is used to find the best split in tree learning; to be more efficient, an approximate algorithm is needed, so the sparsity-aware algorithm and the weighted quantile sketch are introduced for it. Further, we gain insight into the XGBoost system through its column blocks and cache-aware access patterns, which explain why XGBoost scales better and is more widely used than other existing systems.</div>Q39zhao