http://wiki.math.uwaterloo.ca/statwiki/api.php?action=feedcontributions&user=Ag3wong&feedformat=atomstatwiki - User contributions [US]2024-03-28T18:37:57ZUser contributionsMediaWiki 1.41.0http://wiki.math.uwaterloo.ca/statwiki/index.php?title=F21-STAT_441/841_CM_763-Proposal&diff=51229F21-STAT 441/841 CM 763-Proposal2021-12-13T02:39:04Z<p>Ag3wong: </p>
<hr />
<div>Use this format (Don’t remove Project 0)<br />
<br />
Project # 0 Group members:<br />
<br />
Last name, First name<br />
<br />
Last name, First name<br />
<br />
Last name, First name<br />
<br />
Last name, First name<br />
<br />
Title: Making a String Telephone<br />
<br />
Description: We use paper cups to make a string phone and talk with friends while learning about sound waves with this science project. (Explain your project in one or two paragraphs).<br />
<br />
--------------------------------------------------------------------<br />
Project # 1 Group members:<br />
<br />
Feng, Jared<br />
<br />
Huang, Xipeng<br />
<br />
Xu, Mingwei<br />
<br />
Yu, Tingzhou<br />
<br />
Title: Patch-Based Convolutional Neural Network for Cancers Classification<br />
<br />
Description: In this project, we consider classifying three classes (tumor types) of cancers based on pathological data. We will follow the paper ''Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification''.<br />
--------------------------------------------------------------------<br />
Project # 2 Group members:<br />
<br />
Anderson, Eric<br />
<br />
Wang, Chengzhi<br />
<br />
Zhong, Kai<br />
<br />
Zhou, Yi Jing<br />
<br />
Title: Data Poison Attacks<br />
<br />
Description: Attempting to create a successful data poisoning attack<br />
<br />
--------------------------------------------------------------------<br />
Project # 3 Group members:<br />
<br />
Chopra, Kanika<br />
<br />
Rajcoomar, Yush<br />
<br />
Bhattacharya, Vaibhav<br />
<br />
Title: Cancer Classification<br />
<br />
Description: We will be classifying three tumour types based on pathological data. <br />
<br />
--------------------------------------------------------------------<br />
Project # 4 Group members:<br />
<br />
Li, Shao Zhong<br />
<br />
Kerr, Hannah <br />
<br />
Wong, Ann Gie<br />
<br />
Title: Predicting "Pawpularity" of Pets with Image Regression<br />
<br />
Description: Analyze raw images and metadata to predict the “Pawpularity” of pet photos, helping shelters and rescuers around the world improve the appeal of their pet profiles so that more animals can be adopted and find their "furever" homes faster.<br />
<br />
--------------------------------------------------------------------<br />
Project # 5 Group members:<br />
<br />
Chin, Jessie Man Wai<br />
<br />
Ooi, Yi Lin<br />
<br />
Shi, Yaqi<br />
<br />
Ngew, Shwen Lyng<br />
<br />
Title: The Application of Classification in Accelerated Underwriting (Insurance)<br />
<br />
Description: Accelerated Underwriting (AUW), also called “express underwriting,” is a faster and easier process for people in good health to obtain life insurance. The traditional underwriting process is often painful for both customers and insurers. From the customer's perspective, they have to complete different types of questionnaires and provide medical tests involving blood, urine, saliva and other results. Underwriters, on the other hand, have to manually go through every single policy to assess the risk of each applicant. AUW allows people who are deemed “healthy” to forgo medical exams. Since COVID-19, it has become a more pressing topic, as traditional underwriting cannot be performed under stay-at-home orders. However, this imposes a burden on the insurance company to estimate risk accurately with fewer test results. <br />
<br />
This is where data science comes in. With different classification methods, we can address the underwriting process’ five pain points: labor, speed, efficiency, pricing and mortality. This allows us to better estimate risk and classify clients by their eligibility for accelerated underwriting. For the final project, we use data from one of the leading US insurers to analyze how clients can be classified for AUW. We will use factors such as health data, medical history, family history and insurance history to determine eligibility.<br />
<br />
--------------------------------------------------------------------<br />
Project # 6 Group members:<br />
<br />
Wang, Carolyn<br />
<br />
Cyrenne, Ethan<br />
<br />
Nguyen, Dieu Hoa<br />
<br />
Sin, Mary Jane<br />
<br />
Title: Pawpularity (PetFinder Kaggle Competition)<br />
<br />
Description: Using images and metadata on the images to predict the popularity of pet photos, which is calculated based on page view statistics and other metrics from the PetFinder website.<br />
<br />
--------------------------------------------------------------------<br />
Project # 7 Group members:<br />
<br />
Bhattacharya, Vaibhav<br />
<br />
Chatoor, Amanda<br />
<br />
Prathap Das, Sutej<br />
<br />
Title: PetFinder.my - Pawpularity Contest [https://www.kaggle.com/c/petfinder-pawpularity-score/overview]<br />
<br />
Description: In this competition, we will analyze raw images and metadata to predict the “Pawpularity” of pet photos. We'll train and test our model on PetFinder.my's thousands of pet profiles.<br />
<br />
--------------------------------------------------------------------<br />
Project # 8 Group members:<br />
<br />
Yan, Xin<br />
<br />
Duan, Yishu<br />
<br />
Di, Xibei<br />
<br />
Title: The application of classification on company bankruptcy prediction<br />
<br />
Description: TBD<br />
--------------------------------------------------------------------<br />
Project # 9 Group members:<br />
<br />
Loke, Chun Waan<br />
<br />
Chong, Peter<br />
<br />
Osmond, Clarice<br />
<br />
Li, Zhilong<br />
<br />
Title: TBD<br />
<br />
Description: TBD<br />
<br />
--------------------------------------------------------------------<br />
<br />
Project # 10 Group members:<br />
<br />
O'Farrell, Ethan<br />
<br />
D'Astous, Justin<br />
<br />
Hamed, Waqas<br />
<br />
Vladusic, Stefan<br />
<br />
Title: Pawpularity (Kaggle)<br />
<br />
Description: Predicting the popularity of animal photos based on photo metadata<br />
--------------------------------------------------------------------<br />
Project # 11 Group members:<br />
<br />
JunBin, Pan<br />
<br />
Title: TBD<br />
<br />
Description: TBD<br />
--------------------------------------------------------------------<br />
Project # 12 Group members:<br />
<br />
Kar Lok, Ng<br />
<br />
Muhan (Iris), Li<br />
<br />
Wu, Mingze<br />
<br />
Title: NFL Health & Safety - Helmet Assignment competition (Kaggle Competition)<br />
<br />
Description: Assigning players to the helmet in a given footage of head collision in football play.<br />
--------------------------------------------------------------------<br />
Project # 13 Group members:<br />
<br />
Livochka, Anastasiia<br />
<br />
Wong, Cassandra<br />
<br />
Evans, David<br />
<br />
Yalsavar, Maryam<br />
<br />
Title: TBD<br />
<br />
Description: TBD<br />
--------------------------------------------------------------------<br />
Project # 14 Group Members:<br />
<br />
Zeng, Mingde<br />
<br />
Lin, Xiaoyu<br />
<br />
Fan, Joshua<br />
<br />
Rao, Chen Min<br />
<br />
Title: Toxic Comment Classification, Kaggle<br />
<br />
Description: Using Wikipedia comments labeled for toxicity to train a model that detects toxicity in comments.<br />
--------------------------------------------------------------------<br />
Project # 15 Group Members:<br />
<br />
Huang, Yuying<br />
<br />
Anugu, Ankitha<br />
<br />
Chen, Yushan<br />
<br />
Title: Implementation of the classification task between crop and weeds<br />
<br />
Description: Our work will be based on the paper ''Crop and Weeds Classification for Precision Agriculture using Context-Independent Pixel-Wise Segmentation''.<br />
--------------------------------------------------------------------<br />
Project # 16 Group Members:<br />
<br />
Wang, Lingshan<br />
<br />
Li, Yifan<br />
<br />
Liu, Ziyi<br />
<br />
Title: Implement and Improve CNN in Multi-Class Text Classification<br />
<br />
Description: We will apply Bidirectional Encoder Representations from Transformers (BERT) to classify real-world data (building an efficient classifier for case study interview materials) and improve it algorithmically in the context of text classification, supported by a real-world data set. The implementation of BERT allows us to further analyze the efficiency and practicality of the algorithm when dealing with imbalanced datasets at both the data-input and modelling levels.<br />
The dataset is composed of case study HTML files containing case information that can be classified into multiple industry categories. We will implement multi-class classification to break down the information in each case material into pre-determined subcategories (e.g., behavioral questions, consulting questions, questions for new business/market entry, etc.). We will process the raw data into several formats (e.g., HTML, JSON, pandas data frames) and choose the most efficient raw-data processing logic based on runtime and algorithmic optimization.<br />
--------------------------------------------------------------------<br />
Project # 17 Group members:<br />
<br />
Malhi, Dilmeet<br />
<br />
Joshi, Vansh<br />
<br />
Syamala, Aavinash <br />
<br />
Islam, Sohan<br />
<br />
Title: Kaggle project: PetFinder.my - Pawpularity Contest<br />
<br />
Description: In this competition, we will analyze raw images provided by PetFinder.my to predict the “Pawpularity” of pet photos.<br />
--------------------------------------------------------------------<br />
<br />
Project # 18 Group members:<br />
<br />
Yuwei, Liu<br />
<br />
Daniel, Mao<br />
<br />
Title: Sartorius - Cell Instance Segmentation (Kaggle) [https://www.kaggle.com/c/sartorius-cell-instance-segmentation]<br />
<br />
Description: Detect single neuronal cells in microscopy images<br />
<br />
--------------------------------------------------------------------<br />
<br />
Project #19 Group members:<br />
<br />
Samuel, Senko<br />
<br />
Tyler, Verhaar<br />
<br />
Zhang, Bowen<br />
<br />
Title: NBA Game Prediction<br />
<br />
Description: We will build a win/loss classifier for NBA games using player and game data and also incorporating alternative data (ex. sports betting data).<br />
<br />
-------------------------------------------------------------------<br />
<br />
Project #20 Group members:<br />
<br />
Mitrache, Christian<br />
<br />
Renggli, Aaron<br />
<br />
Saini, Jessica<br />
<br />
Mossman, Alexandra<br />
<br />
Title: Classification and Deep Learning for Healthcare Provider Fraud Detection Analysis<br />
<br />
Description: TBD<br />
<br />
--------------------------------------------------------------------<br />
<br />
Project # 21 Group members:<br />
<br />
Wang, Kun<br />
<br />
Title: TBD<br />
<br />
Description : TBD<br />
<br />
--------------------------------------------------------------------<br />
<br />
Project # 22 Group members:<br />
<br />
Guray, Egemen<br />
<br />
Title: Traffic Sign Recognition System (TSRS): SVM and Convolutional Neural Network<br />
<br />
Description: I will build a prediction system to recognize road signs in the German Traffic Sign Dataset using a CNN.<br />
--------------------------------------------------------------------<br />
<br />
Project # 23 Group members:<br />
<br />
Bsodjahi<br />
<br />
Title: Modeling Pseudomonas aeruginosa bacteria state through its genes expression activity<br />
<br />
Description: Label Pseudomonas aeruginosa gene expression data through unsupervised learning (e.g., the EM algorithm) and then model the bacterial state as a function of its gene expression</div>Ag3wonghttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=The_Detection_of_Black_Ice_Accidents_Using_CNNs&diff=50268The Detection of Black Ice Accidents Using CNNs2021-11-14T19:54:14Z<p>Ag3wong: /* Data collection */</p>
<hr />
<div>== Presented by == <br />
<br />
Ann Gie Wong, Hannah Kerr, Shao Zhong Li<br />
<br />
== Introduction ==<br />
As automated vehicles (AVs) become more popular, it is critical that these cars be tested on every realistic driving scenario. Since AVs aim to improve safety on the road, they must be able to handle all kinds of road conditions. One way an AV can prevent an accident is by moving from a passive safety system to an active safety system once a risk is identified. <br />
<br />
Every country has its own challenges; in Canada, for example, AVs need to understand how to drive in the winter. However, not enough testing and training has been done to mitigate winter risks. Black ice is one of the leading causes of accidents in the winter and is very challenging to see, since it is a thin, transparent layer of ice. Because of this, focus needs to be placed on enabling AVs to identify black ice.<br />
<br />
== Previous Work ==<br />
In the past, other methods of detecting black ice included:<br />
<br />
<ul><br />
<li> Sensors </li><br />
<ul><br />
<li> Electric current sensors embedded in concrete<br />
<li> Change in electrical current resistance between stainless steel columns inside the concrete, depending on what is on top of the road<br />
</ul><br />
<li> Sound Waves:<br />
<ul><br />
<li> Used three different sound waves<br />
<li> Road conditions detected through reflectance of the waves <br />
<li> To be used for basic data in the development of road condition detectors <br />
</ul><br />
<li> Light Sources<br />
<ul><br />
<li> Different road conditions have unique light reflection<br />
<li> Specular and diffuse reflections<br />
<li> Types of ice were classified based on thickness and volume<br />
<li> Other road conditions could be determined through reflection as well </li><br />
</ul><br />
</ul><br />
<br />
Transportation in general has been using artificial intelligence for many different purposes. <br />
<br />
Vehicle and pedestrian detection has used various forms of convolutional neural networks such as AlexNet, YOLO, R-CNN, and Faster R-CNN. Some models had better performance while others had faster processing times, but overall great success has been achieved. <br />
<br />
In addition, studies on traffic sign identification have used similar CNN structures. These algorithms process high-definition images quickly and recognize the boundary of the traffic sign, allowing for fast processing. <br />
<br />
Lastly, the detection of cracks in the road has used CNN algorithms to identify the existence of a crack and classify its length with a maximum misclassification of 1 cm. <br />
<br />
<br />
Significant progress has been made in transportation, but there is a lack of training on winter roads and black ice specifically. Since CNNs have had great success at quickly identifying objects of interest in images, using a CNN for black ice detection and accident prevention is a natural extension.<br />
<br />
== Data collection ==<br />
<br />
A CNN is a popular class of Artificial Neural Network (ANN) commonly used in image analysis due to its excellent performance in object detection. It differs from a plain ANN in that it maintains and propagates spatial information in images by adding convolutional and pooling layers.<br />
As mentioned earlier, various studies in the transportation sector have used CNNs, but black ice detection on the road has thus far only been studied using other methodologies (sensors and optics).<br />
This study aims to detect black ice by applying a CNN to images of various road conditions.<br />
In this section, the details of data collection, the 1st and 2nd preprocessing stages, the model design, and the training undertaken (see Figure 1) are discussed.<br />
<br />
[[File:DBIAPAVUCNN figure 1.png]]<br />
<br />
1. Data Collection<br />
<br />
Image data were collected using Google Image Search for four categories of road condition: road, wet road, snow road, and black ice. The images cover different regions and road environments and total 2,230 images.<br />
<br />
[[File:DBIAPAVUCNN table 1.png]]<br />
<br />
2. Data Split<br />
<br />
To assist in feature extraction, objects such as road structures, lanes, and shoulders within each image were removed so that the road characteristics of interest could be clearly identified. <br />
<br />
The image size was chosen by weighing the pros and cons. In general, making images smaller causes a loss of information, but smaller images allow for a larger number of training images and deeper network implementations. Conversely, when the image size is large, feature extraction can be more accurate because finer features are not lost and the network can learn more robust features, but the number of images is reduced and a deep neural network is difficult to implement. In this study, a 128 x 128 px size was selected for training. The results of the data split are shown in Figure 2.<br />
<br />
[[File:DBIAPAVUCNN table 2.png]]<br />
<br />
<b> 1st Preprocessing </b><br />
<br />
In the 1st stage of Preprocessing, the channel was set up and data padding was performed on the training data.<br />
<br />
1. Channel Setup<br />
<br />
The 128 × 128 px colour images obtained earlier through the data split have the advantage that three channels are available to help identify characteristics. However, with three channels the data are large, which limits the number of training images and the implementation of deep neural networks. Therefore, this study transformed the data into grayscale images.<br />
<br />
[[File:DBIAPAVUCNN table 3.png]]<br />
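As a minimal sketch of this channel setup (the study does not state which conversion formula was used, so the standard ITU-R BT.601 luminance weights below are an assumption), the three-channel image can be collapsed to one channel as follows:<br />

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an (H, W, 3) RGB array to a single channel using the
    common ITU-R BT.601 luminance weights (an assumption; the study
    does not specify its conversion formula)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb * weights).sum(axis=-1)

# Stand-in for one 128 x 128 colour road image with values in [0, 1].
img = np.random.rand(128, 128, 3)
gray = to_grayscale(img)
assert gray.shape == (128, 128)  # one channel instead of three
```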
<br />
2. Data padding<br />
<br />
Data padding resizes the training images by adding meaningless values around the existing data. When training was done without data padding, very low accuracy (25%) and high loss values were obtained (Table 4). This is because the edges of the image data are distorted by the data augmentation.<br />
<br />
Therefore, in this study, the image data were padded to prevent distortion of the edges of the data.<br />
<br />
[[File:DBIAPAVUCNN table 4.png]]<br />
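A minimal sketch of this step, assuming symmetric zero-padding from the 128 x 128 split images up to the 150 x 150 size obtained after the 1st preprocessing (the exact padding scheme and fill value are not specified in the source):<br />

```python
import numpy as np

def pad_image(img, target=150):
    """Zero-pad a square grayscale image up to `target` px per side,
    so augmentations (rotations, shifts) distort the padding border
    rather than the road content at the image edges. Symmetric
    zero-padding is an assumption here."""
    h, w = img.shape
    top, left = (target - h) // 2, (target - w) // 2
    return np.pad(img, ((top, target - h - top), (left, target - w - left)))

img = np.random.rand(128, 128)   # one grayscale image from the data split
padded = pad_image(img)
assert padded.shape == (150, 150)
```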
<br />
<b> 2nd preprocessing </b><br />
<br />
After the 1st preprocessing, during which channel setup and data padding were performed, 150 x 150 px grayscale images were obtained in the following per-class quantities: 4,900 images each for road and wet road, and 3,900 each for snow road and black ice (Table 5).<br />
<br />
[[File:DBIAPAVUCNN table 5.png]]<br />
<br />
3. Data Augmentation<br />
<br />
In the 2nd preprocessing stage, to improve the diversity of the image data obtained through Google Image Search, additional images were created through data augmentation of the existing data.<br />
<br />
This is done in the hope of improving the accuracy of the model, since large amounts of data are essential for high accuracy and for preventing overfitting. <br />
<br />
Data augmentation helps greatly in this study in particular, because black ice is not only seasonal but also relies on very specific conditions to form, making black ice images sparser than the other classes. To improve the accuracy of the CNN, the ImageDataGenerator function provided by the Keras library was used to augment the data under the conditions in Table 6.<br />
<br />
[[File:DBIAPAVUCNN table 6.png]]<br />
<br />
The process of building the training data through data augmentation is as follows.<br />
<br />
From the original 17,600 images, 1,000 per class were randomly extracted and designated as test data. The remaining images, the training data, were augmented using the ImageDataGenerator function, increasing the total to 10,000 images per class. This pool was then split into train and validation data at a ratio of 8:2. Therefore, the final ratio of train, validation, and test data for each class was 8:2:1 (Figure 3 and Table 7).<br />
<br />
[[File:DBIAPAVUCNN table 7.png]]<br />
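The split arithmetic described above can be checked directly (the numbers are taken from the text):<br />

```python
# Per-class split described above: 1,000 test images are held out from the
# originals, the rest are augmented up to a pool of 10,000 per class, and
# that pool is split 8:2 into train and validation data.
test = 1000                         # extracted before augmentation
augmented_pool = 10000              # training images per class after augmentation
train = int(augmented_pool * 0.8)   # 8,000
val = augmented_pool - train        # 2,000
assert (train // test, val // test) == (8, 2)  # final train:val:test = 8:2:1
```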
<br />
== Model Architecture ==<br />
<br />
The model consists of two main components: feature extraction and classification. The feature extraction component can be broken into two sections. The model begins with 2 convolutional layers using a 3x3 kernel, paired with the ReLU activation function to avoid vanishing gradients. The convolutional layers extract the main features of the input image, such as edges, orientation, and color, that help distinguish black ice. They are followed by 2 max-pooling layers with a 2x2 stride. The pooling layers map the grid of values in each window to a single output value, reducing the output size of the convolutional layers; this picks out the most relevant features while reducing the amount of computation needed downstream. Max-pooling yields only the maximum value in each window. A 20% dropout layer is then applied, which randomly “drops” 20% of the weights from the previous convolutional layers during training to improve generalization and avoid over-fitting. This structure is then repeated, making up the first component of the feature extraction workflow. <br />
<br />
The same layout is then repeated once more, but with 1 convolutional layer followed by 1 max-pooling layer instead of 2, and one dropout layer with the same parameters. <br />
<br />
The classification component of the architecture consists of fully connected layers feeding into a softmax output: 4 fully connected layers with 3 dropout layers in between. <br />
<br />
Finally, the Stochastic Gradient Descent optimizer was used, training for up to 200 epochs with a batch size of 32. Training is stopped early if the validation loss does not fall below the minimum value encountered so far within 20 epochs.<br />
<br />
[[File:DBIAPAVUCNN figure 4.png]]<br />
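A hedged Keras sketch of the architecture just described is given below. The summary does not state the filter counts or dense-layer widths, so the values used (32/64 filters; 512/256/128 units) are illustrative assumptions, not the authors' configuration:<br />

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 1), n_classes=4):
    """Sketch of the described architecture. Filter counts and dense
    widths are assumptions; the layer sequence follows the text."""
    m = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: two 3x3 conv layers, two 2x2 max-pools, 20% dropout
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.MaxPooling2D(2),
        layers.Dropout(0.2),
        # Block 2: the same layout repeated
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.MaxPooling2D(2),
        layers.Dropout(0.2),
        # Block 3: single conv + pool + dropout
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Dropout(0.2),
        # Classification head: 4 dense layers with dropout in between
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(n_classes, activation="softmax"),
    ])
    m.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = build_model()
# Early stopping as described: stop if val loss has not improved in 20 epochs.
stopper = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)
```

Training would then pair `model.fit(..., epochs=200, batch_size=32, callbacks=[stopper])` with the augmented data to reproduce the regime described above.<br />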
<br />
== Results ==<br />
<br />
The loss reported for the optimal model was 0.008 and 0.097, with accuracies of 0.998 and 0.982, respectively. A loss/accuracy graph over time and a confusion matrix are included below. The confusion matrix evaluates the classification rates between each pair of the 4 classes, which allows us to analyze patterns and misclassification behaviour between classes. <br />
<br />
[[File:DBIAPAVUCNN figure 5.png]]<br />
<br />
[[File:DBIAPAVUCNN table 9.png]]<br />
<br />
Moreover, the precision and recall rates are reported for a more holistic view of the model’s performance on each class. The model produces few false positives and false negatives, scoring quite high on precision and recall for each class.<br />
<br />
[[File:DBIAPAVUCNN table 10.png]]<br />
<br />
== Conclusion ==<br />
<br />
In this study, a CNN was used to detect black ice, which can be difficult to see with the naked eye, with the goal of preventing black ice accidents involving AVs. Data were collected and classified into four classes, and the train, validation, and test data for each class were obtained after pre-processing was performed in the order of data split, data padding, and data augmentation.<br />
<br />
Unlike a DCNN model, the CNN model proposed in this study was designed to be relatively simple yet robust, achieving an accuracy of about 96%.<br />
<br />
This study is significant in that black ice, a significant risk factor even in the era of AVs, was detected using AI rather than sensors and wavelengths. It is expected that this will help prevent black ice accidents involving AVs and serve as basic data for future convergence research.<br />
<br />
Overall, the CNN-based black ice detection method can be applied by deploying the CNN on AVs and CCTVs as part of an early-warning system. Approaching vehicles can then be made aware in advance of the possibility of black ice in the area, and drivers will be able to take preventive measures such as slowing down and steering more carefully. <br />
<br />
== Critiques ==<br />
<br />
Due to the use of grayscale images of black ice, which mainly forms at dawn, the resulting model tends to confuse some classes because light characteristics are lost in the training data. The shimmer of light reflecting off a snowy road disappears when the same image is converted to grayscale, so the model cannot correctly distinguish one class from the other. Therefore, further research needs to be conducted to find an optimal neural network that uses RGB images to detect black ice.<br />
<br />
Also, since the data were collected through Google Image Search, only images taken relatively close to the road were used in training the model. Therefore, further research needs to be conducted to construct a CNN model applicable to various situations by varying the distance and angle to the road to be detected [48–50] in the image data.<br />
<br />
== References ==<br />
<br />
[1] Hojun Lee, Minhee Kang, Jaein Song, and Keeyeon Hwang. The Detection of Black Ice Accidents for Preventative Automated Vehicles Using Convolutional Neural Networks. 2020.</div>Ag3wonghttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat441F21&diff=50066stat441F212021-11-09T01:15:24Z<p>Ag3wong: /* Paper presentation */</p>
<hr />
<div><br />
<br />
== [[F20-STAT 441/841 CM 763-Proposal| Project Proposal ]] ==<br />
<br />
<!--[https://goo.gl/forms/apurag4dr9kSR76X2 Your feedback on presentations]--><br />
<br />
=Paper presentation=<br />
{| class="wikitable"<br />
<br />
{| border="1" cellpadding="3"<br />
|-<br />
|width="60pt"|Date<br />
|width="250pt"|Name <br />
|width="15pt"|Paper number <br />
|width="700pt"|Title<br />
|width="15pt"|Link to the paper<br />
|width="30pt"|Link to the summary<br />
|width="30pt"|Link to the video<br />
|-<br />
|Sep 15 (example)||Ri Wang || ||Sequence to sequence learning with neural networks.||[http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] || [https://youtu.be/JWozRg_X-Vg?list=PLehuLRPyt1HzXDemu7K4ETcF0Ld_B5adG&t=539]<br />
|-<br />
|Week of Nov 16 || Ali Ghodsi || || || || ||<br />
|-<br />
|Week of Nov 16 || Jared Feng, Xipeng Huang, Mingwei Xu, Tingzhou Yu|| || Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification || [http://proceedings.mlr.press/v139/bai21c/bai21c.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] ||<br />
|-<br />
|Week of Nov 16 || Kanika Chopra, Yush Rajcoomar || || Automatic Bank Fraud Detection Using Support Vector Machines || [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.863.5804&rep=rep1&type=pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Zeng Mingde, Lin Xiaoyu, Fan Joshua, Rao Chen Min || || || || ||<br />
|-<br />
|Week of Nov 22 || Justin D'Astous, Waqas Hamed, Stefan Vladusic, Ethan O'Farrell || || A Probabilistic Approach to Neural Network Pruning || [http://proceedings.mlr.press/v139/qian21a/qian21a.pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Cassandra Wong, Anastasiia Livochka, Maryam Yalsavar, David Evans || || Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification || [https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Hou_Patch-Based_Convolutional_Neural_CVPR_2016_paper.pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Jessie Man Wai Chin, Yi Lin Ooi, Yaqi Shi, Shwen Lyng Ngew || || || || ||<br />
|-<br />
|Week of Nov 22 || Eric Anderson, Chengzhi Wang, Kai Zhong, YiJing Zhou || || || || ||<br />
|-<br />
|Week of Nov 29 || Ethan Cyrenne, Dieu Hoa Nguyen, Mary Jane Sin, Carolyn Wang || || || || ||<br />
|-<br />
|Week of Nov 22 || Ann Gie Wong, Curtis Li, Hannah Kerr || || The Detection of Black Ice Accidents for Preventative Automated Vehicles Using Convolutional Neural Networks || [https://www.mdpi.com/2079-9292/9/12/2178/htm Paper] ||[https://wiki.math.uwaterloo.ca/statwiki/index.php?title=The_Detection_of_Black_Ice_Accidents_Using_CNNs&fbclid=IwAR0K4YdnL_hdRnOktmJn8BI6-Ra3oitjJof0YwluZgUP1LVFHK5jyiBZkvQ Summary] ||<br />
|-</div>Ag3wonghttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat441F21&diff=50062stat441F212021-11-09T01:12:02Z<p>Ag3wong: /* Paper presentation */</p>
<hr />
<div><br />
<br />
== [[F20-STAT 441/841 CM 763-Proposal| Project Proposal ]] ==<br />
<br />
<!--[https://goo.gl/forms/apurag4dr9kSR76X2 Your feedback on presentations]--><br />
<br />
=Paper presentation=<br />
{| class="wikitable"<br />
<br />
{| border="1" cellpadding="3"<br />
|-<br />
|width="60pt"|Date<br />
|width="250pt"|Name <br />
|width="15pt"|Paper number <br />
|width="700pt"|Title<br />
|width="15pt"|Link to the paper<br />
|width="30pt"|Link to the summary<br />
|width="30pt"|Link to the video<br />
|-<br />
|Sep 15 (example)||Ri Wang || ||Sequence to sequence learning with neural networks.||[http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] || [https://youtu.be/JWozRg_X-Vg?list=PLehuLRPyt1HzXDemu7K4ETcF0Ld_B5adG&t=539]<br />
|-<br />
|Week of Nov 16 || Ali Ghodsi || || || || ||<br />
|-<br />
|Week of Nov 16 || Jared Feng, Xipeng Huang, Mingwei Xu, Tingzhou Yu|| || Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification || [http://proceedings.mlr.press/v139/bai21c/bai21c.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] ||<br />
|-<br />
|Week of Nov 16 || Kanika Chopra, Yush Rajcoomar || || Automatic Bank Fraud Detection Using Support Vector Machines || [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.863.5804&rep=rep1&type=pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Zeng Mingde, Lin Xiaoyu, Fan Joshua, Rao Chen Min || || || || ||<br />
|-<br />
|Week of Nov 22 || Justin D'Astous, Waqas Hamed, Stefan Vladusic, Ethan O'Farrell || || A Probabilistic Approach to Neural Network Pruning || [http://proceedings.mlr.press/v139/qian21a/qian21a.pdf] || ||<br />
|-<br />
|Week of Nov 22 || Cassandra Wong, Anastasiia Livochka, Maryam Yalsavar, David Evans || || Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification || [https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Hou_Patch-Based_Convolutional_Neural_CVPR_2016_paper.pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Jessie Man Wai Chin, Yi Lin Ooi, Yaqi Shi, Shwen Lyng Ngew || || || || ||<br />
|-<br />
|Week of Nov 22 || Eric Anderson, Chengzhi Wang, Kai Zhong, YiJing Zhou || || || || ||<br />
|-<br />
|Week of Nov 29 || Ethan Cyrenne, Dieu Hoa Nguyen, Mary Jane Sin, Carolyn Wang || || || || ||<br />
|-<br />
|Week of Nov 22 || Ann Gie Wong, Curtis Li, Hannah Kerr || || The Detection of Black Ice Accidents for Preventative<br />
Automated Vehicles Using Convolutional Neural Networks || [https://www.mdpi.com/2079-9292/9/12/2178/htm Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=The_Detection_of_Black_Ice_Accidents_Using_CNNs&fbclid=IwAR0K4YdnL_hdRnOktmJn8BI6-Ra3oitjJof0YwluZgUP1LVFHK5jyiBZkvQ Summary] ||<br />
|-</div>Ag3wonghttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=stat441F21&diff=50058stat441F212021-11-09T01:02:55Z<p>Ag3wong: /* Paper presentation */</p>
<hr />
<div><br />
<br />
== [[F21-STAT 441/841 CM 763-Proposal| Project Proposal ]] ==<br />
<br />
<!--[https://goo.gl/forms/apurag4dr9kSR76X2 Your feedback on presentations]--><br />
<br />
=Paper presentation=<br />
{| class="wikitable" border="1" cellpadding="3"<br />
|-<br />
|width="60pt"|Date<br />
|width="250pt"|Name <br />
|width="15pt"|Paper number <br />
|width="700pt"|Title<br />
|width="15pt"|Link to the paper<br />
|width="30pt"|Link to the summary<br />
|width="30pt"|Link to the video<br />
|-<br />
|Sep 15 (example)||Ri Wang || ||Sequence to sequence learning with neural networks.||[http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] || [https://youtu.be/JWozRg_X-Vg?list=PLehuLRPyt1HzXDemu7K4ETcF0Ld_B5adG&t=539 Video]<br />
|-<br />
|Week of Nov 16 || Ali Ghodsi || || || || ||<br />
|-<br />
|Week of Nov 16 || Jared Feng, Xipeng Huang, Mingwei Xu, Tingzhou Yu|| || Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification || [http://proceedings.mlr.press/v139/bai21c/bai21c.pdf Paper] || [https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Going_Deeper_with_Convolutions Summary] ||<br />
|-<br />
|Week of Nov 16 || Kanika Chopra, Yush Rajcoomar || || Automatic Bank Fraud Detection Using Support Vector Machines || [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.863.5804&rep=rep1&type=pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Zeng Mingde, Lin Xiaoyu, Fan Joshua, Rao Chen Min || || || || ||<br />
|-<br />
|Week of Nov 22 || Justin D'Astous, Waqas Hamed, Stefan Vladusic, Ethan O'Farrell || || A Probabilistic Approach to Neural Network Pruning || [http://proceedings.mlr.press/v139/qian21a/qian21a.pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Cassandra Wong, Anastasiia Livochka, Maryam Yalsavar, David Evans || || Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification || [https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Hou_Patch-Based_Convolutional_Neural_CVPR_2016_paper.pdf Paper] || ||<br />
|-<br />
|Week of Nov 22 || Jessie Man Wai Chin, Yi Lin Ooi, Yaqi Shi, Shwen Lyng Ngew || || || || ||<br />
|-<br />
|Week of Nov 22 || Eric Anderson, Chengzhi Wang, Kai Zhong, YiJing Zhou || || || || ||<br />
|-<br />
|Week of Nov 29 || Ethan Cyrenne, Dieu Hoa Nguyen, Mary Jane Sin, Carolyn Wang || || || || ||<br />
|-<br />
|Week of Nov 22 || Ann Gie Wong, Curtis Li, Hannah Kerr || || The Detection of Black Ice Accidents for Preventative Automated Vehicles Using Convolutional Neural Networks || [https://www.mdpi.com/2079-9292/9/12/2178/htm Paper] || ||</div>Ag3wong