Proposals for STAT946 (Deep Learning) final projects, Fall 2015

Project 0: (This is just an example)

Group members: first name family name, first name family name, first name family name

Title: Sentiment Analysis on Movie Reviews

Description: The idea and data for this project are taken from http://www.kaggle.com/c/sentiment-analysis-on-movie-reviews. Sentiment analysis is the problem of determining whether a given string expresses positive or negative sentiment. For example, “A series of escapades demonstrating the adage that what is good for the goose is also good for the gander, some of which occasionally amuses but none of which amounts to much of a story” carries negative sentiment, but it is not immediately clear which parts of the sentence make it so. This competition asks participants to implement machine learning algorithms that can determine the sentiment of a movie review.
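
For illustration only (not part of the proposal above), a minimal bag-of-words baseline for this kind of task might look like the following sketch; scikit-learn is assumed, and the toy reviews and labels are placeholders for the Kaggle training data.

<pre>
# Minimal bag-of-words sentiment baseline (illustrative sketch only; the
# toy reviews below stand in for the Kaggle phrase-level training data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "a charming and often moving story",
    "none of it amounts to much of a story",
    "occasionally amuses but quickly grows tiresome",
    "a genuinely funny and heartfelt film",
]
train_labels = [1, 0, 0, 1]  # 1 = positive, 0 = negative (placeholder labels)

# Turn each phrase into a sparse word-count vector, then fit a linear classifier.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)

clf = LogisticRegression()
clf.fit(X_train, train_labels)

# Predict the sentiment of a new phrase using the same vocabulary.
print(clf.predict(vectorizer.transform(["a moving and heartfelt film"])))
</pre>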

Project 1:

Group members: Sean Aubin, Brent Komer

Title: Convolutional Neural Networks in SLAM

Description: We will try to replicate the results reported in "Convolutional Neural Networks-based Place Recognition" using Caffe and GoogLeNet. As a "stretch" goal, we will try to convert the CNN to a spiking neural network (a technique created by Eric Hunsberger) for greater biological plausibility and easier integration with other cognitive systems using Nengo. This work will help Brent start his PhD research investigating cognitive localisation systems and object manipulation.
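
The matching step in such a pipeline can be sketched independently of the network itself: images are pushed through a pretrained CNN (e.g. GoogLeNet in Caffe) and places are compared via their feature vectors. The snippet below illustrates only that comparison, with random vectors standing in as placeholders for CNN activations.

<pre>
# Place matching on top of CNN features (sketch only): random vectors
# stand in for activations from a pretrained network such as GoogLeNet.
import numpy as np

rng = np.random.RandomState(0)
database_features = rng.randn(100, 1024)   # 100 known places, 1024-d features
query_feature = database_features[42] + 0.1 * rng.randn(1024)  # noisy revisit

def cosine_similarity(a, B):
    """Cosine similarity between one vector and each row of a matrix."""
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B.dot(a)

# The most similar database feature is taken as the recognized place.
scores = cosine_similarity(query_feature, database_features)
print("best matching place:", int(np.argmax(scores)))  # expected: 42
</pre>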

Project 2:

Group members: Xinran Liu, Fatemeh Karimi, Deepak Rishi & Chris Choi

Title: Image Classification with Deep Learning

Description: Our aim is to participate in the Digit Recognizer Kaggle challenge, which asks participants to correctly classify handwritten digits from the Modified National Institute of Standards and Technology (MNIST) dataset. As a first approach, we propose using a simple feed-forward neural network to form a baseline for comparison. We then plan to experiment with different aspects of the network, such as its architecture and activation functions, and to incorporate a wide variety of training methods.
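
As a rough sketch of the feed-forward baseline (with synthetic arrays standing in for the Kaggle MNIST data, and layer sizes chosen arbitrarily), a one-hidden-layer network trained by plain gradient descent might look like this:

<pre>
# One-hidden-layer feed-forward network trained by gradient descent
# (sketch with synthetic data; replace X and y with the Kaggle MNIST arrays).
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(512, 784)                      # fake 28x28 images, flattened
y = rng.randint(0, 10, size=512)             # fake digit labels
Y = np.eye(10)[y]                            # one-hot targets

W1 = 0.01 * rng.randn(784, 128); b1 = np.zeros(128)
W2 = 0.01 * rng.randn(128, 10);  b2 = np.zeros(10)
lr = 0.1

for epoch in range(20):
    # Forward pass: ReLU hidden layer, then softmax over the 10 digits.
    h = np.maximum(0, X.dot(W1) + b1)
    logits = h.dot(W2) + b2
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y)), y]).mean()

    # Backward pass: softmax cross-entropy gradients, averaged over the batch.
    dlogits = (probs - Y) / len(y)
    dW2 = h.T.dot(dlogits); db2 = dlogits.sum(axis=0)
    dh = dlogits.dot(W2.T) * (h > 0)
    dW1 = X.T.dot(dh); db1 = dh.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training loss: %.3f" % loss)
</pre>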

Project 3:

Group members: Ri Wang, Maysum Panju, Mahmood Gohari

Title: Machine Translation Using Neural Networks

Description: The goal of this project is to translate between languages using different types of neural networks and the algorithms described in "Sequence to Sequence Learning with Neural Networks" and "Neural Machine Translation by Jointly Learning to Align and Translate". Different vector representations of the input sentences (word frequency, Word2Vec, etc.) will be used, and all combinations of algorithms will be ranked in terms of accuracy. Our data will come mainly from Europarl and Tatoeba. The common target language will be English, to allow for easier judgement of translation quality.
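
Whatever model is used, the parallel corpora have to be turned into index sequences the networks can consume. The sketch below illustrates that preprocessing step only; the sentence pairs and special tokens are placeholders, not the actual Europarl or Tatoeba data.

<pre>
# Turning parallel sentences into padded index sequences for a
# sequence-to-sequence model (sketch; the pairs below are placeholders
# for Europarl / Tatoeba sentence pairs).
pairs = [
    ("das ist ein haus", "this is a house"),
    ("ich sehe den hund", "i see the dog"),
]

PAD, EOS, UNK = "<pad>", "<eos>", "<unk>"

def build_vocab(sentences):
    """Assign an integer id to every word, after the special tokens."""
    vocab = {PAD: 0, EOS: 1, UNK: 2}
    for s in sentences:
        for w in s.split():
            vocab.setdefault(w, len(vocab))
    return vocab

src_vocab = build_vocab(p[0] for p in pairs)
tgt_vocab = build_vocab(p[1] for p in pairs)

def encode(sentence, vocab, max_len):
    """Map words to ids, append end-of-sentence, and pad to a fixed length."""
    ids = [vocab.get(w, vocab[UNK]) for w in sentence.split()] + [vocab[EOS]]
    return ids + [vocab[PAD]] * (max_len - len(ids))

max_len = 6
for src, tgt in pairs:
    print(encode(src, src_vocab, max_len), "->", encode(tgt, tgt_vocab, max_len))
</pre>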

Project 4:

Group members: Peter Blouw, Jan Gosmann

Title: Using Structured Representations in Memory Networks to Perform Question Answering

Description: Memory networks are machine learning systems that combine memory and inference to perform tasks that involve sophisticated reasoning (see here and here). Our goal in this project is to first implement a memory network that replicates prior performance on the bAbI question-answering tasks described in Weston et al. (2015). We then hope to improve upon this baseline by using more sophisticated representations of the sentences that encode the questions posed to the network. Current implementations often use a bag-of-words encoding, which throws away important syntactic information relevant to determining what a particular question is asking. We will therefore explore the use of POS tags, n-gram information, and parse trees to augment memory network performance.
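
The contrast between a bag-of-words encoding and a richer one can be illustrated with two hypothetical feature functions; POS tagging and parsing would require an external tagger, so only a bigram augmentation is sketched here.

<pre>
# Bag-of-words vs. bigram-augmented sentence encodings (hypothetical
# feature functions illustrating the contrast described above).
from collections import Counter

def bag_of_words(sentence):
    """Unordered word counts: word order is discarded entirely."""
    return Counter(sentence.lower().split())

def bow_plus_bigrams(sentence):
    """Word counts plus adjacent word pairs, so some word order is kept."""
    words = sentence.lower().split()
    feats = Counter(words)
    feats.update(" ".join(pair) for pair in zip(words, words[1:]))
    return feats

q1 = "John gave the ball to Mary"
q2 = "Mary gave the ball to John"

# Identical under bag-of-words, distinguishable once word order is kept.
print(bag_of_words(q1) == bag_of_words(q2))          # True
print(bow_plus_bigrams(q1) == bow_plus_bigrams(q2))  # False
</pre>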

Project 5:

Group members: Anthony Caterini, Tim Tse

Title: The Allen AI Science Challenge

Description: The goal of this project is to create an artificial intelligence model that can answer multiple-choice questions from a grade 8 science exam with a success rate better than the best 8th graders. The underlying model will be a deep neural network, which will help parse the large amount of information needed to answer these questions. The model should also learn, over time, to give better answers as it acquires more data. This is a Kaggle challenge, and the link to the challenge is here. The data used to build the model will come from the Kaggle website.
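
As a simple non-neural point of reference (not the proposed deep model), multiple-choice questions are often scored by lexical overlap between each option and the question plus any retrieved supporting text; the question, options, and support sentence below are made up for illustration.

<pre>
# Word-overlap baseline for multiple-choice science questions (a simple
# non-neural point of reference, not the proposed deep model; the
# question, options, and support text below are placeholders).
def overlap_score(question, option, support=""):
    """Fraction of the option's words that also appear in the question/support."""
    q_words = set((question + " " + support).lower().split())
    o_words = set(option.lower().split())
    return len(q_words & o_words) / max(len(o_words), 1)

question = "Which form of energy does a plant use to make food?"
options = {
    "A": "sound energy",
    "B": "light energy",
    "C": "magnetic energy",
    "D": "mechanical energy",
}
support = "Plants use light energy from the sun to make food."

best = max(options, key=lambda k: overlap_score(question, options[k], support))
print("predicted answer:", best)  # expected: B
</pre>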

Project 6:

Group members: Valerie Platsko

Title: Classification for P300-Speller Using Convolutional Neural Networks

Description: The goal of this project is to replicate (and possibly extend) the results in Convolutional Neural Networks for P300 Detection with Application to Brain-Computer Interfaces, which used convolutional neural networks to recognize P300 responses in recorded EEG and, additionally, to correctly recognize attended targets. (In the P300-Speller application, letters flash in rows and columns, so a single P300 response is associated with multiple potential targets.) The data in the paper came from http://www.bbci.de/competition/iii/ (dataset II), and there is an additional P300-Speller dataset available from a previous version of the competition.
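
Before any network can be trained, the continuous EEG has to be cut into fixed-length epochs around each stimulus flash. The sketch below shows that windowing step on synthetic data; the channel count, sampling rate, and window length are placeholder values, not necessarily those used in the paper.

<pre>
# Cutting continuous EEG into per-stimulus epochs for a CNN classifier
# (synthetic data; channel count, sampling rate, and window length are
# placeholders, not the values used in the paper).
import numpy as np

rng = np.random.RandomState(0)
n_channels, fs = 64, 240                  # 64 electrodes, 240 Hz sampling
eeg = rng.randn(n_channels, 60 * fs)      # one minute of fake EEG
stim_onsets = np.arange(fs, 55 * fs, fs // 2)   # fake flash times (in samples)
window = int(0.65 * fs)                   # 650 ms post-stimulus window

# One (channels x samples) slice per flash; these become the CNN inputs.
epochs = np.stack([eeg[:, t:t + window] for t in stim_onsets])
print(epochs.shape)   # (n_stimuli, n_channels, n_samples)
</pre>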

Project 7:

Group members: Amirreza Lashkari, Derek Latremouille, Rui Qiao and Luyao Ruan

Title: Digit Recognizer

Description: The goal in this competition is to take an image of a single handwritten digit and determine what that digit is. To do so, a deep neural network will be applied to extract features and classify the images. The data for this competition come from the MNIST ("Modified National Institute of Standards and Technology") dataset, a classic within the machine learning community that has been extensively studied. This is a Kaggle challenge (see here).
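
Project 2 above sketches a baseline network; the snippet here illustrates only the data handling. In the Kaggle train.csv each row is a label followed by 784 pixel values, which need to be reshaped into 28x28 images and rescaled before training. A tiny in-memory CSV with zeroed pixels stands in for the real file.

<pre>
# Reshaping Kaggle "Digit Recognizer" CSV rows into 28x28 images
# (a tiny in-memory CSV stands in for the real train.csv).
import io
import numpy as np

# Real rows have 1 label column + 784 pixel columns; this fake file has
# the same layout but all-zero pixels, just to show the reshaping.
fake_csv = "label," + ",".join("pixel%d" % i for i in range(784)) + "\n"
fake_csv += "\n".join("7," + ",".join(["0"] * 784) for _ in range(5))

data = np.loadtxt(io.StringIO(fake_csv), delimiter=",", skiprows=1)
labels = data[:, 0].astype(int)
images = data[:, 1:].reshape(-1, 28, 28) / 255.0   # scale pixels to [0, 1]
print(labels.shape, images.shape)   # (5,) (5, 28, 28)
</pre>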

Project 8:

Group members: Abdullah Rashwan and Priyank Jaini

Title: Learning the Parameters for Continuous Distribution Sum-Product Networks using Bayesian Moment Matching

Description: Sum-Product Networks have generated interest due to their ability to perform exact inference in time linear in the size of the network. Parameter learning, however, remains a problem. We have previously proposed an online Bayesian Moment Matching algorithm to learn the parameters of discrete distributions; in this work, we extend the algorithm to learn the parameters of continuous distributions as well.
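
The full online algorithm is described in the group's prior work; as an illustration of the core idea only, the moment-matching projection in the Gaussian case collapses a weighted mixture of Gaussians onto a single Gaussian with the same first and second moments:

<pre>
# Core projection step of moment matching in the Gaussian case: collapse
# a weighted mixture of Gaussians onto one Gaussian with the same first
# and second moments (illustration only, not the full online algorithm).
import numpy as np

weights = np.array([0.2, 0.5, 0.3])       # mixture weights (sum to 1)
means = np.array([-1.0, 0.5, 2.0])        # component means
variances = np.array([0.5, 1.0, 0.25])    # component variances

# First moment of the mixture, and second moment E[x^2] = sum w_i (var_i + mu_i^2).
matched_mean = np.sum(weights * means)
second_moment = np.sum(weights * (variances + means ** 2))
matched_var = second_moment - matched_mean ** 2

print("matched Gaussian: mean = %.3f, variance = %.3f"
      % (matched_mean, matched_var))
</pre>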