F21-STAT 441/841 CM 763-Proposal: Difference between revisions
Revision as of 20:21, 6 October 2020
Use this format (Don’t remove Project 0)
Project # 0 Group members:
Last name, First name
Last name, First name
Last name, First name
Last name, First name
Title: Making a String Telephone
Description: In this science project, we use paper cups to make a string telephone and talk with friends while learning about sound waves. (Explain your project in one or two paragraphs).
Project # 1 Group members:
Song, Quinn
Loh, William
Bai, Junyue
Choi, Phoebe
Title: APTOS 2019 Blindness Detection
Description:
Our team chose the APTOS 2019 Blindness Detection Challenge from Kaggle. The goal of this challenge is to build a machine learning model that detects diabetic retinopathy by screening retina images.
Millions of people suffer from diabetic retinopathy, the leading cause of blindness among working-aged adults. It is caused by damage to the blood vessels of the light-sensitive tissue at the back of the eye (retina). In rural areas where medical screening is difficult to conduct, it is challenging to detect the disease efficiently. Aravind Eye Hospital hopes to utilize machine learning techniques to gain the ability to automatically screen images for disease and provide information on how severe the condition may be.
Our team plans to solve this problem by applying our knowledge in image processing and classification.
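As a first baseline before any deep model, the screening task can be sketched as multinomial logistic regression over flattened, downsampled retina images. The sketch below is illustrative only: it uses synthetic data in place of the APTOS images, and the five classes stand in for the challenge's severity grades 0–4; nothing here is the team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_logreg(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic regression fit by batch gradient descent."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]            # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = (P - Y) / n                 # gradient of mean cross-entropy
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

# Synthetic stand-in for downsampled retina images, 5 severity grades (0-4).
X = rng.normal(size=(200, 64))
true_W = rng.normal(size=(64, 5))
y = (X @ true_W).argmax(axis=1)

W, b = train_logreg(X, y, n_classes=5)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real attempt would replace the synthetic matrix with preprocessed image features and the linear model with a CNN, but the train/evaluate loop keeps the same shape.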
Project # 2 Group members:
Li, Dylan
Li, Mingdao
Lu, Leonie
Sharman, Bharat
Title: Risk prediction in life insurance industry using supervised learning algorithms
Description:
In this project, we aim to replicate and possibly improve upon the work of Jayabalan et al. in their paper “Risk prediction in life insurance industry using supervised learning algorithms”. We will be using the Prudential Life Insurance dataset that the authors used and have shared with us. We will pre-process the data to replace missing values, perform feature selection with CFS and feature reduction with PCA, and then use the processed data for classification via four algorithms: Neural Networks, Random Tree, REPTree, and Multiple Linear Regression. We will compare the performance of these algorithms using the MAE and RMSE metrics, and produce visualizations that explain the results clearly even to a non-quantitative audience.
Our goal in this project is to apply the algorithms we learned in class to an industry dataset and produce results that can aid better, data-driven decision making.
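The pipeline described above (replace missing values, reduce dimensionality, regress, compare with MAE and RMSE) can be sketched end to end in NumPy. The data below is synthetic, standing in for the Prudential set, and CFS feature selection is omitted for brevity; the remaining steps mirror the description.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the Prudential data: 300 policies, 20 features,
# with roughly 10% of entries missing (the real set needs the same treatment).
X = rng.normal(size=(300, 20))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300)   # toy risk score
X[rng.random(X.shape) < 0.1] = np.nan

# 1. Replace missing values with the column mean.
col_means = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_means, X)

# 2. Feature reduction with PCA: keep the top 10 principal components.
Xc = X_imp - X_imp.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T

# 3. Multiple linear regression on the reduced features.
A = np.hstack([Z, np.ones((len(Z), 1))])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# 4. Compare predictions against the targets with MAE and RMSE.
mae = np.abs(pred - y).mean()
rmse = np.sqrt(((pred - y) ** 2).mean())
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```

The same four-step structure carries over when the linear model is swapped for Neural Networks, Random Tree, or REPTree.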
Project # 3 Group members:
Parco, Russel
Sun, Scholar
Yao, Jacky
Zhang, Daniel
Title: Lyft Motion Prediction for Autonomous Vehicles
Description:
Our team has decided to participate in the Lyft Motion Prediction for Autonomous Vehicles Kaggle competition. The aim of this competition is to build a model which, given a set of objects on the road (pedestrians, other cars, etc.), predicts the future movement of these objects.
Autonomous vehicles (AVs) are expected to dramatically redefine the future of transportation. However, there are still significant engineering challenges to be solved before one can fully realize the benefits of self-driving cars. One such challenge is building models that reliably predict the movement of traffic agents around the AV, such as cars, cyclists, and pedestrians.
Our aim is to apply classification techniques learned in class to optimally predict how these objects move.
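A standard sanity-check reference for this task is a constant-velocity baseline: extrapolate each agent's last observed displacement into the future. The sketch below is only that baseline, not the model the team will build for the competition.

```python
import numpy as np

def constant_velocity_forecast(history, n_future):
    """Predict future (x, y) positions by repeating the agent's most
    recent displacement — a common baseline for motion prediction."""
    history = np.asarray(history, dtype=float)   # shape (T, 2)
    velocity = history[-1] - history[-2]         # last observed step
    steps = np.arange(1, n_future + 1)[:, None]
    return history[-1] + steps * velocity        # shape (n_future, 2)

# An agent moving 1.0 m east and 0.5 m north per frame:
past = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
future = constant_velocity_forecast(past, n_future=3)
print(future)   # rows (3, 1.5), (4, 2), (5, 2.5)
```

Any learned model should beat this extrapolation on curved or interacting trajectories, which is what makes it a useful yardstick.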
Project # 4 Group members:
Chow, Jonathan
Dharani, Nyle
Nasirov, Ildar
Title: Classification with Abstinence
Description:
We seek to implement the algorithm described in Deep Gamblers: Learning to Abstain with Portfolio Theory (https://papers.nips.cc/paper/9247-deep-gamblers-learning-to-abstain-with-portfolio-theory.pdf). The paper describes augmenting the classic classification problem to include the option to abstain from making a prediction when uncertainty is high.
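Concretely, the paper adds an (m+1)-th "abstain" output to an m-class classifier and trains it with the gambler's loss, −log(p_correct + p_abstain / o), where the payoff o ∈ (1, m] controls how costly abstaining is. A minimal NumPy sketch of that loss follows; the paper itself trains deep networks, and the shapes here are toy.

```python
import numpy as np

def gamblers_loss(logits, labels, payoff):
    """Deep Gamblers loss: the network emits m+1 scores, the last being
    'abstain'. Per-sample loss = -log(p_correct + p_abstain / payoff)."""
    z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_correct = p[np.arange(len(labels)), labels]
    p_abstain = p[:, -1]
    return -np.log(p_correct + p_abstain / payoff)

# Two classes plus an abstain output; uniform logits give p = 1/3 each,
# so the loss is -log(1/3 + (1/3)/2) = log 2.
logits = np.zeros((1, 3))
loss = gamblers_loss(logits, np.array([0]), payoff=2.0)
print(loss)
```

A large payoff makes abstaining nearly worthless (recovering ordinary cross-entropy); a payoff near 1 makes abstaining almost as good as being right.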
Project # 5 Group members:
Jones, Hayden
Leung, Michael
Haque, Bushra
Mustatea, Cristian
Title: Combine Convolution with Recurrent Networks for Text Classification
Description:
Our team chose to reproduce the paper Combine Convolution with Recurrent Networks for Text Classification (https://arxiv.org/pdf/2006.15795.pdf) from arXiv. The goal of the paper is to combine CNN and RNN architectures so that their outputs are fused more flexibly than by simple concatenation, through the use of a “neural tensor layer”, in order to improve text classification. In particular, the paper claims that this novel architecture excels at three types of text classification: sentiment analysis, news categorization, and topical classification. Our team plans to recreate the paper by working in two pairs: one pair will implement the CNN pipeline and the other the RNN pipeline. We will be working with TensorFlow 2 and Google Colab, reproducing the paper’s experimental results by training on the same six publicly available datasets used in the paper.
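The fusion idea can be illustrated with the general neural-tensor form (Socher et al., 2013), where each output unit couples the CNN vector c and the RNN vector r through its own bilinear matrix: out_k = tanh(cᵀ W[k] r + V[k]·[c; r] + b[k]). The NumPy sketch below uses assumed toy dimensions; the paper's exact parameterization may differ, and the team's implementation will be in TensorFlow 2.

```python
import numpy as np

rng = np.random.default_rng(2)

def neural_tensor_layer(c, r, W, V, b):
    """Fuse a CNN feature vector c and an RNN feature vector r:
    out_k = tanh(c^T W[k] r + V[k] @ [c; r] + b[k])."""
    bilinear = np.einsum('i,kij,j->k', c, W, r)     # k bilinear forms
    linear = V @ np.concatenate([c, r])             # standard linear term
    return np.tanh(bilinear + linear + b)

d_c, d_r, k = 8, 6, 4              # toy dimensions (hypothetical)
c = rng.normal(size=d_c)           # stand-in for the CNN branch output
r = rng.normal(size=d_r)           # stand-in for the RNN branch output
W = rng.normal(size=(k, d_c, d_r)) * 0.1
V = rng.normal(size=(k, d_c + d_r)) * 0.1
b = np.zeros(k)

fused = neural_tensor_layer(c, r, W, V, b)
print(fused.shape)   # (4,)
```

Unlike concatenation, every output unit here can model a multiplicative interaction between the two branches, which is the flexibility the paper is after.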