CatBoost: unbiased boosting with categorical features — statwiki, wiki.math.uwaterloo.ca (last revised 2021-11-23 by Slngew)
<hr />
<div>== Presented by == <br />
Jessie Man Wai Chin, Yi Lin Ooi, Yaqi Shi, Shwen Lyng Ngew <br />
<br />
== Introduction == <br />
This paper presents a new boosting strategy, CatBoost, together with the key techniques behind the algorithm. CatBoost is a new gradient boosting method that outperforms other publicly available boosting implementations in practice. Its two algorithmic advances are the implementation of ordered boosting and an innovative way of processing categorical features, both of which address the prediction shift problem discussed in detail below. The authors provide a theoretical derivation of the algorithm along with intuitive examples that demonstrate the strength and efficiency of the model.<br />
<br />
== Previous Work == <br />
<br />
The CatBoost algorithm is based on the gradient boosting method, a powerful machine-learning technique with a solid theoretical basis and strong predictive power.<br />
<br />
<br />
There is also previous work on handling categorical features. The most basic approach is one-hot encoding, which can produce an infeasible number of new features for high-cardinality categories. A common alternative is the target statistic (TS), which replaces a category with a numerical statistic of the target computed over the training examples belonging to that category [25]. Standard variants each have drawbacks:<br />
<br />
1. The greedy TS is computed over the whole training set, so the target of an example leaks into its own feature value.<br />
<br />
2. The holdout TS is computed on a held-out partition, which avoids leakage but wastes training data.<br />
<br />
LightGBM instead partitions the categories by gradient statistics at each split [19, 20], which increases computation and memory cost. These limitations motivate the ordered TS used in CatBoost.<br />
<br />
== Motivation == <br />
<br />
Gradient boosting is a popular and powerful machine learning method that can substantially increase the accuracy of a model. It builds a strong predictor iteratively by performing gradient descent in function space. Nevertheless, there exists a statistical issue which makes gradient boosting less powerful than intended. The paper identifies this issue as prediction shift caused by target leakage.<br />
<br />
A prediction shift is a shift between the distribution of the model's predictions on the training samples and its predictions on the test samples. It occurs because the model is trained using all training samples, including the label of each sample it is later evaluated on, so the model has effectively seen its own targets during learning. In gradient boosting, the current estimate <math>F^t</math> is built on top of the previous model <math>F^{t-1}</math> at each iteration. Formally, the conditional distribution <math>F^{t-1}(\mathbf{x}_k)|\mathbf{x}_k</math>, where <math>\mathbf{x}_k</math> is a training sample, is shifted from <math>F^{t-1}(\mathbf{x})|\mathbf{x}</math>, where <math>\mathbf{x}</math> is a test sample. This is caused by target leakage: the labels are used both to fit the model and to compute the gradients on the same samples. The resulting model is biased and can overfit. To address this issue, the paper introduces ''ordered boosting'', which is the motivation behind categorical boosting.<br />
<br />
<br />
== Ordered Target Statistics ==<br />
<br />
Before introducing ordered boosting, we first introduce the concept of the ordered target statistic, which is used in this algorithm to avoid target leakage and overfitting. Define the target statistic for the <math>i^{th}</math> categorical feature of the <math>k^{th}</math> training sample as <math>\hat{x}_k^i = \mathbb{E}(y \,|\, x^i = x_k^i)</math>. In the ordered TS, a random permutation of the training samples is constructed so that each sample is encoded using only samples that the model has already seen.<br />
<br />
In the ordered TS, a random permutation <math>\sigma</math> is generated, and the statistic for the <math>k^{th}</math> training sample is estimated using only the samples that precede it in the permutation: <math>D_k = \{x_j : \sigma(j)<\sigma(k)\}</math>. This ensures that the encoding of a sample never uses that sample's own target, so the estimate is unbiased. In contrast, the greedy TS used in traditional methods is computed on all training samples, including the sample being encoded, which leaks the target and can lead to overfitting. This issue motivates the ordered TS, which relies on random permutations of the training samples.<br />
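The ordered TS described above can be sketched in a few lines of Python. This is an illustrative sketch, not CatBoost's actual implementation: the function name is hypothetical, and the smoothing parameter <math>a</math> and prior <math>p</math> follow the standard smoothed-TS formula <math>(\sum_{x_j \in D_k} \mathbb{1}_{\{x_j^i = x_k^i\}} y_j + a\,p) / (\sum_{x_j \in D_k} \mathbb{1}_{\{x_j^i = x_k^i\}} + a)</math>.<br />

```python
import random

def ordered_target_statistic(categories, targets, prior, a=1.0, seed=0):
    """Ordered TS sketch: encode each sample's category using only the
    targets of samples that precede it in a random permutation (its
    history D_k). `prior` and `a` smooth the estimate for rare or
    unseen categories."""
    n = len(categories)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    sums, counts = {}, {}              # running target sum / count per category
    encoded = [0.0] * n
    for idx in perm:                   # visit samples in permutation order
        c = categories[idx]
        s, cnt = sums.get(c, 0.0), counts.get(c, 0)
        encoded[idx] = (s + a * prior) / (cnt + a)   # uses only the history
        sums[c] = s + targets[idx]                   # then add self to history
        counts[c] = cnt + 1
    return encoded
```

Note that the first sample of each category visited by the permutation is encoded as the prior, since its history is empty.<br />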
<br />
== Ordered Boosting ==<br />
<br />
To overcome the prediction shift discussed in the previous section, ''ordered boosting'' is introduced. In ordered boosting, the residual for each sample at each step of boosting is computed with a model that was trained without that sample, which guarantees that the model has never seen the label it is being evaluated against. Concretely, let <math>M_i</math> denote the model learned using only the first <math>i</math> samples of a random permutation <math>\sigma</math> of the training set. The residual for the <math>k^{th}</math> sample, <math>r^{t+1}(\mathbf{x}_k, y_k) = y_k - M_{\sigma(k)-1}(\mathbf{x}_k)</math>, is then unshifted, since <math>M_{\sigma(k)-1}</math> was trained without the observation <math>(\mathbf{x}_k, y_k)</math>. To illustrate this idea, see Figure 1.<br />
[[File:Ordered boosting principle.png | center]]<br />
<div align="center">Figure 1: Ordered boosting principle, ordered by <math>\sigma</math></div><br />
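The principle in Figure 1 can be illustrated with a minimal sketch. As an assumption for brevity, the "model" below is just the running mean of the targets seen so far rather than a real learner, and the function name is hypothetical; the point is only that each sample's residual is computed before its own label enters the model.<br />

```python
import random

def ordered_residuals(targets, seed=0):
    """Sketch of the ordered-boosting principle: the residual for sample k
    is computed with a model fit only on the samples preceding k in a
    random permutation, so the model never sees sample k's own label.
    Here the model is simply the running mean of the observed targets."""
    n = len(targets)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    total, count = 0.0, 0
    residuals = [0.0] * n
    for idx in perm:
        prediction = total / count if count else 0.0  # plays the role of M_{sigma(k)-1}
        residuals[idx] = targets[idx] - prediction    # unshifted residual
        total += targets[idx]                         # only now add the label
        count += 1
    return residuals
```

For constant targets, only the first sample in the permutation gets a nonzero residual, since every later sample is predicted from a history that already matches it exactly.<br />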
<br />
In ordered boosting, the algorithm starts by generating <math>s+1</math> independent random permutations <math>\sigma_0, \sigma_1, \dots, \sigma_s</math> of the training dataset. The permutations <math>\sigma_1, \dots, \sigma_s</math> are used to evaluate the splits that define the internal nodes of a tree, while <math>\sigma_0</math> is used to determine the leaf values of the terminal nodes. The algorithm then builds a decision tree on the base predictors, using the same splitting criterion as standard gradient boosting. Note that the trees built under this model are symmetric (oblivious): every node at the same depth uses the same split, which improves the algorithm's runtime. <br />
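The symmetric (oblivious) tree structure mentioned above can be sketched as follows. Since every node at depth <math>d</math> applies the same split, a sample's leaf index is just an integer whose <math>d</math>-th bit records the outcome of split <math>d</math>, so evaluation is branch-free. This is an illustrative sketch with a hypothetical function name, not CatBoost's internal representation.<br />

```python
def oblivious_leaf_index(features, splits):
    """In an oblivious (symmetric) tree, every node at depth d uses the
    SAME split (feature_id, threshold). A sample's leaf is the integer
    whose d-th bit is the boolean outcome of split d, giving 2^depth
    leaves and constant-time, branch-free evaluation."""
    index = 0
    for bit, (feature_id, threshold) in enumerate(splits):
        if features[feature_id] > threshold:
            index |= 1 << bit
    return index
```

With two splits there are four leaves, indexed 0 through 3 by the two split outcomes.<br />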
<br />
Define <math>M_{r,j}(i)</math> as the current prediction for the ''i''-th sample based on a model built from the first ''j'' samples of permutation <math>\sigma_r</math>. To construct a tree <math>T_t</math>, a random permutation <math>\sigma_r</math> is sampled from <math>\sigma_1, \dots, \sigma_s</math>. <br />
<br />
A gradient is computed at each step as <math>grad_{r,j}(i)=\frac{\partial L(y_i,s)}{\partial s}\Big|_{s=M_{r,j}(i)}</math>. The leaf value used for the ''i''-th sample during tree construction is the average of the gradients of the samples that fall in the same leaf and precede ''i'' in the permutation:<br />
<math>\Delta(i) = \operatorname{avg}\left( grad_{r,\sigma_r(i)-1}(j) \;:\; leaf_r(j)=leaf_r(i),\ \sigma_r(j)<\sigma_r(i) \right)</math><br />
When a tree <math>T_t</math> is constructed, it is used to boost all the existing models <math>M_{r',j}</math>. In the final model <math>F</math>, the leaf values are computed using the standard gradient boosting procedure; recall that <math>\sigma_0</math> is used to determine these final leaf values, so the <math>i^{th}</math> training sample is matched to <math>leaf_0(i)</math>. See Figure 2 for the detailed algorithm of how CatBoost builds a decision tree.<br />
[[File:Catboost tree algorithm.png | center]]<br />
<div align="center">Figure 2: Algorithm to build a tree in Catboost</div><br />
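The ordered leaf-value rule used during tree construction can be sketched as follows. This is an illustrative sketch with a hypothetical function name: the value assigned to sample ''i'' averages the gradients of same-leaf samples that precede ''i'' in the permutation, so sample ''i'''s own gradient never leaks into its value.<br />

```python
def ordered_leaf_values(leaf_ids, grads, sigma):
    """Sketch of the ordered leaf-value rule: the value for sample i is the
    average gradient over samples j with leaf_ids[j] == leaf_ids[i] and
    sigma[j] < sigma[i] (i.e. same leaf, earlier in the permutation)."""
    order = sorted(range(len(sigma)), key=lambda i: sigma[i])
    sums, counts = {}, {}              # running gradient sum / count per leaf
    values = [0.0] * len(sigma)
    for i in order:                    # visit samples in permutation order
        leaf = leaf_ids[i]
        s, c = sums.get(leaf, 0.0), counts.get(leaf, 0)
        values[i] = s / c if c else 0.0   # average over the history only
        sums[leaf] = s + grads[i]         # then add i's gradient to the history
        counts[leaf] = c + 1
    return values
```

For three samples in one leaf with gradients 3, 6, 9 visited in order, the values are 0, 3, and 4.5: each is the mean of the gradients seen so far, never including the sample's own.<br />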
<br />
<br />
<br />
<br />
== CatBoost Results ==<br />
CatBoost is implemented as an algorithm for gradient boosting on decision trees. It is compared with the most popular open-source libraries, XGBoost and LightGBM. Categorical features are preprocessed using the ordered TS method for CatBoost. The results, measured by logloss and zero-one loss, are presented in Table 1 along with their statistical significance. <br />
<br />
[[File:Table1.png | center|500px|Image: 500 pixels]]<br />
<br />
<div align="center">Table 1: Comparison with baselines: logloss / zero-one loss (relative increase for baselines).</div><br />
<br />
In terms of running time, CatBoost in Plain mode and LightGBM are the fastest, while Ordered mode is about 1.7 times slower. <br />
<br />
The two boosting modes of CatBoost, Plain and Ordered, are compared on all considered datasets, and the results are presented in Table 2. Ordered mode is especially effective on smaller datasets, which supports the hypothesis that a larger bias has a negative effect on performance.<br />
<br />
[[File:Table2.png | center|600px|Image: 600 pixels]]<br />
<br />
<div align="center">Table 2: Plain boosting mode: logloss, zero-one loss and their change relative to Ordered boosting mode.</div><br />
<br />
As illustrated in Figure 3, CatBoost is trained in Ordered and Plain modes on randomly filtered datasets to compare the resulting losses. The relative performance of Plain mode becomes worse as the dataset size decreases. <br />
<br />
[[File:Figure2.png | center|500px|Image: 500 pixels]]<br />
<br />
<div align="center">Figure 3: Relative error of Plain boosting mode compared to Ordered boosting mode depending on the fraction of the dataset. </div><br />
<br />
Different TS options for CatBoost in Ordered boosting mode are compared, and the outcomes can be found in Table 3. The ordered TS used in CatBoost significantly outperforms all other approaches. Among the baselines, the holdout TS is the best on a majority of the datasets. <br />
<br />
[[File:Table3.png | center|500px|Image: 500 pixels]]<br />
<br />
<div align="center">Table 3: Comparison of target statistics, relative change in logloss / zero-one loss compared to ordered TS. </div><br />
<br />
== Conclusion ==<br />
<br />
CatBoost introduces two unique advancements: the implementation of ordered boosting and the ordered TS, which tackle the problem of prediction shift caused by target leakage present in all existing implementations of gradient boosting. The empirical results show that CatBoost outperforms other GBDT packages on most of the datasets, making it an effective tool for supervised machine learning tasks. <br />
<br />
== Critiques ==<br />
<br />
Strengths<br />
<br />
1) The proposed algorithm and current statistical issues are presented in a simple manner<br />
<br />
2) Solid empirical study of the new algorithm compared to current methodologies such as XGBoost and LightGBM<br />
<br />
<br />
Weaknesses<br />
<br />
1) This paper performs its experiments on heterogeneous datasets with categorical features. It does not show the performance of CatBoost relative to XGBoost and LightGBM when working with homogeneous data. Hence, it may be biased to state that CatBoost outperforms other leading GBDT packages. <br />
<br />
2) The tasks in the experiments are all classification tasks. The authors should also include regression tasks to determine whether the results hold for regression problems.<br />
<br />
3) Hyper-parameter settings such as the number of iterations, maximum tree depth and so on can impact CatBoost's performance relative to other GBDT implementations. CatBoost's oblivious decision trees (ODTs) are balanced by definition, so the maximum tree depth affects both memory usage and running time, while the iteration count specifies the maximum number of decision trees CatBoost will construct. It is possible that the authors tuned these hyper-parameters more carefully for CatBoost than for the baselines.<br />
<br />
== References ==<br />
<br />
[1] L. Bottou and Y. L. Cun. Large scale online learning. In Advances in neural information processing systems, pages 217–224, 2004.<br />
<br />
[2] L. Breiman. Out-of-bag estimation, 1996.<br />
<br />
[3] L. Breiman. Using iterated bagging to debias regressions. Machine Learning, 45(3):261–277, 2001.<br />
<br />
[4] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and regression trees. CRC press, 1984.<br />
<br />
[5] R. Caruana and A. Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd international conference on Machine learning, pages 161–168. ACM, 2006.<br />
<br />
[6] B. Cestnik et al. Estimating probabilities: a crucial task in machine learning. In ECAI, volume 90, pages 147–149, 1990.<br />
<br />
[7] O. Chapelle, E. Manavoglu, and R. Rosales. Simple and scalable response prediction for display advertising. ACM Transactions on Intelligent Systems and Technology (TIST), 5(4):61, 2015.<br />
<br />
[8] T. Chen and C. Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM, 2016.<br />
<br />
[9] M. Ferov and M. Modrý. Enhancing lambdamart using oblivious trees. arXiv preprint arXiv:1609.05610, 2016.<br />
<br />
[10] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The annals of statistics, 28(2):337–407, 2000.<br />
<br />
[11] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics New York, 2001.<br />
<br />
[12] J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232, 2001.<br />
<br />
[13] J. H. Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.<br />
<br />
[14] A. Gulin, I. Kuralenok, and D. Pavlov. Winning the transfer learning track of yahoo!’s learning to rank challenge with yetirank. In Yahoo! Learning to Rank Challenge, pages 63–76, 2011.<br />
<br />
[15] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Herbrich, S. Bowers, et al. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, pages 1–9. ACM, 2014.<br />
<br />
[16] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, pages 3149–3157, 2017.<br />
<br />
[17] M. Kearns and L. Valiant. Cryptographic limitations on learning boolean formulae and finite automata. Journal of the ACM (JACM), 41(1):67–95, 1994.<br />
<br />
[18] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10(Mar):777–801, 2009. <br />
<br />
[19] LightGBM. Categorical feature support. http://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html#categorical-feature-support, 2017.<br />
<br />
[20] LightGBM. Optimal split for categorical features. http://lightgbm.readthedocs.io/en/latest/Features.html#optimal-split-for-categorical-features, 2017.<br />
<br />
[21] LightGBM. feature_histogram.cpp. https://github.com/Microsoft/LightGBM/blob/master/src/treelearner/feature_histogram.hpp, 2018.<br />
<br />
[22] X. Ling, W. Deng, C. Gu, H. Zhou, C. Li, and F. Sun. Model ensemble for click prediction in bing search ads. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 689–698. International World Wide Web Conferences Steering Committee, 2017.<br />
<br />
[23] Y. Lou and M. Obukhov. Bdt: Gradient boosted decision tables for high accuracy and scoring efficiency. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1893–1901. ACM, 2017.<br />
<br />
[24] L. Mason, J. Baxter, P. L. Bartlett, and M. R. Frean. Boosting algorithms as gradient descent. In Advances in neural information processing systems, pages 512–518, 2000.<br />
<br />
[25] D. Micci-Barreca. A preprocessing scheme for high-cardinality categorical attributes in classification and prediction problems. ACM SIGKDD Explorations Newsletter, 3(1):27–32, 2001.<br />
<br />
[26] B. P. Roe, H.-J. Yang, J. Zhu, Y. Liu, I. Stancu, and G. McGregor. Boosted decision trees as an alternative to artificial neural networks for particle identification. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 543(2):577–584, 2005.<br />
<br />
[27] L. Rokach and O. Maimon. Top–down induction of decision trees classifiers — a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 35(4):476–487, 2005.<br />
<br />
[28] D. B. Rubin. The bayesian bootstrap. The annals of statistics, pages 130–134, 1981.<br />
<br />
[29] Q. Wu, C. J. Burges, K. M. Svore, and J. Gao. Adapting boosting for information retrieval measures. Information Retrieval, 13(3):254–270, 2010.<br />
<br />
[30] K. Zhang, B. Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, pages 819–827, 2013.<br />
<br />
[31] O. Zhang. Winning data science competitions. https://www.slideshare.net/ShangxuanZhang/winning-data-science-competitions-presented-by-owen-zhang, 2015.<br />
<br />
[32] Y. Zhang and A. Haghani. A gradient boosting method to improve travel time prediction. Transportation Research Part C: Emerging Technologies, 58:308–324, 2015.</div>