Proposal for STAT946 (Deep Learning) final projects, Fall 2015

Project 0: (This is just an example)

Group members: first name family name, first name family name, first name family name

Title: Sentiment Analysis on Movie Reviews

Description: The idea and data for this project are taken from http://www.kaggle.com/c/sentiment-analysis-on-movie-reviews. Sentiment analysis is the problem of determining whether a given string expresses positive or negative sentiment. For example, “A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story” carries negative sentiment, but it is not immediately clear which parts of the sentence make it so. The competition asks entrants to implement machine learning algorithms that can determine the sentiment of a movie review.
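
To make the task concrete, a bag-of-words model with a linear classifier is a common first baseline for this kind of competition. The sketch below assumes scikit-learn is available; the two phrases and their labels are placeholders standing in for the Kaggle training data, not a solution to the competition.

    # Minimal bag-of-words sentiment baseline (illustrative sketch only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Placeholder phrases and labels standing in for the Kaggle training set.
    phrases = ["a gorgeous , witty , seductive movie",
               "none of which amounts to much of a story"]
    labels = [1, 0]  # 1 = positive sentiment, 0 = negative

    vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigram + bigram counts
    X = vectorizer.fit_transform(phrases)

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(vectorizer.transform(["a witty , seductive story"])))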

Project 1:

Group members: Sean Aubin, Brent Komer

Title: Convolutional Neural Networks in SLAM

Description: We will try to replicate the results reported in "Convolutional Neural Networks-based Place Recognition" using Caffe and GoogLeNet. As a "stretch" goal, we will try to convert the CNN to a spiking neural network (a technique created by Eric Hunsberger) for greater biological plausibility and easier integration with other cognitive systems using Nengo. This work will help Brent start his PhD research investigating cognitive localisation systems and object manipulation.
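
For context, the place-recognition step in this line of work reduces to nearest-neighbour matching of CNN feature vectors. The sketch below illustrates that matching step only; the feature vectors would in practice come from an intermediate layer of a pretrained network (GoogLeNet via Caffe in our plan), and here they are random placeholders.

    # Place recognition as nearest-neighbour matching of CNN features.
    # Features are random placeholders; in practice they would be
    # activations from an intermediate layer of a pretrained CNN.
    import numpy as np

    rng = np.random.default_rng(0)
    db_feats = rng.normal(size=(100, 1024))  # features of 100 mapped places
    query = rng.normal(size=1024)            # feature of the current camera view

    # Cosine similarity between the query and every stored place.
    db_norm = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    scores = db_norm @ (query / np.linalg.norm(query))
    print("best matching place:", int(np.argmax(scores)))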

Project 2:

Group members: Xinran Liu, Fatemeh Karimi, Deepak Rishi & Chris Choi

Title: Image Classification with Deep Learning

Description: Our aim is to participate in the Digit Recognizer Kaggle challenge, where one has to correctly classify the Modified National Institute of Standards and Technology (MNIST) dataset of handwritten numerical digits. For our first approach we propose using a simple feed-forward neural network to form a baseline for comparison. We then plan to experiment with different aspects of the network, such as its architecture and activation functions, and to incorporate a wide variety of training methods.
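
To make the baseline concrete, here is a minimal sketch of such a network in plain numpy: one hidden layer with ReLU, a softmax output over the ten digits, and vanilla SGD. Random arrays stand in for an actual MNIST batch, and the dimensions and learning rate are illustrative, not tuned.

    # Minimal feed-forward baseline (sketch): affine -> ReLU -> affine -> softmax.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 784))            # a batch of flattened 28x28 images
    y = rng.integers(0, 10, size=64)          # digit labels 0..9 (placeholders)

    W1 = rng.normal(scale=0.01, size=(784, 128)); b1 = np.zeros(128)
    W2 = rng.normal(scale=0.01, size=(128, 10)); b2 = np.zeros(10)
    lr = 0.1

    for step in range(100):
        # Forward pass.
        h = np.maximum(0, X @ W1 + b1)
        logits = h @ W2 + b2
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)

        # Cross-entropy gradient, backpropagated through both layers.
        d_logits = p.copy(); d_logits[np.arange(64), y] -= 1; d_logits /= 64
        dW2 = h.T @ d_logits; db2 = d_logits.sum(axis=0)
        dh = d_logits @ W2.T; dh[h <= 0] = 0
        dW1 = X.T @ dh; db1 = dh.sum(axis=0)

        # Plain SGD update.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2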

Project 3:

Group members: Ri Wang, Maysum Panju

Title: Machine Translation Using Neural Networks

Description: The goal of this project is to translate languages using different types of neural networks and the algorithms described in "Sequence to Sequence Learning with Neural Networks" and "Neural Machine Translation by Jointly Learning to Align and Translate". Different vector representations for input sentences (word frequency, word2vec, etc.) will be used, and all combinations of algorithms will be ranked in terms of accuracy. Our data will mainly come from Europarl (http://www.statmt.org/europarl/) and Tatoeba (https://tatoeba.org/eng). The common target language will be English to allow for easier judgement of translation quality.
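
As a sketch of the core encoder-decoder idea from the first paper: one RNN folds the source sentence into a fixed vector, and a second RNN decodes target words from it greedily. All sizes, weights, and word ids below are placeholders, and the recurrences are plain tanh cells rather than the trained LSTMs the papers use.

    # Sequence-to-sequence forward pass (illustrative sketch only).
    import numpy as np

    rng = np.random.default_rng(0)
    V_src, V_tgt, d = 50, 60, 32                    # vocab sizes, hidden size
    E_src = rng.normal(scale=0.1, size=(V_src, d))  # source word embeddings
    E_tgt = rng.normal(scale=0.1, size=(V_tgt, d))  # target word embeddings
    W_enc = rng.normal(scale=0.1, size=(2 * d, d))  # encoder recurrence
    W_dec = rng.normal(scale=0.1, size=(2 * d, d))  # decoder recurrence
    W_out = rng.normal(scale=0.1, size=(d, V_tgt))  # hidden state -> vocab scores

    src = [3, 17, 42]                     # a source sentence as word ids
    h = np.zeros(d)
    for w in src:                         # encode: fold each word into h
        h = np.tanh(np.concatenate([E_src[w], h]) @ W_enc)

    word, out = 0, []                     # assume id 0 is the <start> token
    for _ in range(5):                    # decode greedily for 5 steps
        h = np.tanh(np.concatenate([E_tgt[word], h]) @ W_dec)
        word = int(np.argmax(h @ W_out))
        out.append(word)
    print("decoded word ids:", out)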

Project 4:

Group members: Peter Blouw, Jan Gosmann

Title: Using Structured Representations in Memory Networks to Perform Question Answering

Description: Memory networks are machine learning systems that combine memory and inference to perform tasks that involve sophisticated reasoning (see http://arxiv.org/pdf/1410.3916.pdf and http://arxiv.org/pdf/1502.05698v7.pdf). Our goal in this project is to first implement a memory network that replicates prior performance on the bAbI question-answering tasks described in Weston et al. (2015) (http://arxiv.org/pdf/1502.05698v7.pdf). Then, we hope to improve upon this baseline by using more sophisticated representations of the sentences that encode the questions posed to the network. Current implementations often use a bag-of-words encoding, which throws out important syntactic information that is relevant to determining what a particular question is asking. As such, we will explore the use of features such as POS tags, n-gram information, and parse trees to augment memory network performance.
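
To see why the bag-of-words encoding is limiting, the toy example below (our own illustration, not taken from the papers) shows two sentences with different meanings collapsing to the same vector:

    # Bag-of-words collapses word order: two sentences with different
    # meanings get identical encodings, losing who-did-what-to-whom.
    import numpy as np

    vocab = {w: i for i, w in enumerate(
        ["john", "gave", "mary", "the", "book"])}

    def bow(sentence):
        v = np.zeros(len(vocab))
        for w in sentence.lower().split():
            v[vocab[w]] += 1
        return v

    a = bow("john gave mary the book")
    b = bow("mary gave john the book")
    print(np.array_equal(a, b))  # True: the two encodings are identical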

Project 5:

Group members: Anthony Caterini

Title: The Allen AI Science Challenge

Description: The goal of this project is to create an artificial intelligence model that can answer multiple-choice questions on a grade 8 science exam with a success rate better than the best 8th graders. This will involve a deep neural network as the underlying model, to help parse the large amount of information needed to answer these questions. The model should also learn, over time, to produce better answers as it acquires more data. This is a Kaggle challenge (https://www.kaggle.com/c/the-allen-ai-science-challenge), and the data to produce the model will come from the Kaggle website.

Project 6:

Group members: Valerie Platsko

Title: Classification for P300-Speller Using Convolutional Neural Networks

Description: The goal of this project is to replicate (and possibly extend) the results in "Convolutional Neural Networks for P300 Detection with Application to Brain-Computer Interfaces", which used convolutional neural networks to recognize P300 responses in recorded EEG and, additionally, to correctly identify attended targets. (In the P300-Speller application, letters flash in rows and columns, so a single P300 response is associated with multiple potential targets.) The data in the paper came from http://www.bbci.de/competition/iii/ (dataset II), and there is an additional P300-Speller dataset available from a previous version of the competition.
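
As a preprocessing sketch (our own illustration, with placeholder signal and onset times): the raw EEG is cut into a fixed window following each flash, and those windows become the inputs the convolutional network classifies as P300 vs. non-P300.

    # Epoching EEG around stimulus onsets (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 240                              # sampling rate in Hz (as in dataset II)
    eeg = rng.normal(size=(64, 60 * fs))  # placeholder: 64 channels, 60 s of signal
    onsets = [240, 720, 1200]             # placeholder flash onsets (sample indices)
    win = int(0.65 * fs)                  # 650 ms post-stimulus window

    # One (channels x time) window per flash; the CNN labels each as P300 or not.
    epochs = np.stack([eeg[:, t:t + win] for t in onsets])
    print(epochs.shape)                   # (n_flashes, n_channels, n_samples)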