Proposal Fall 2010
Project 1: Classifying New Data Points Using an Outlier Approach

By: Yongpeng Sun


Intuition:

In LDA, we assign a new data point to the class whose center is nearest. It is also desirable, however, to assign a new data point to the class in which it is least of an outlier. To this end, compared with every other class, a new data point should be closer to the center of its assigned class and, after suitable weighting, also closer to that class's directions of variation.


Suppose there are two classes, 0 and 1, each in [math]\displaystyle{ \,d }[/math] dimensions, and a new data point is given. To assign the new data point to a class, we can proceed using the following steps:

Step 1: For each class, find its center and its [math]\displaystyle{ \,d }[/math] directions of variation.


Step 2: For the new data point, form a sum with respect to each of the two classes: the point's distance to the class center, plus the point's distance to each of the [math]\displaystyle{ \,d }[/math] directions of variation weighted (multiplied) by the ratio of the variation in that direction to the total variation in the class.


Step 3: Assign the new point to the class having the smaller of these two sums.


These three steps generalize directly to more than two classes, because assigning a new data point only requires the sum described above with respect to each class. A sketch of the procedure is given below.
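One way to formalize Step 2, assuming the directions of variation are the unit eigenvectors [math]\displaystyle{ \,v_{k1},\dots,v_{kd} }[/math] of class [math]\displaystyle{ \,k }[/math]'s sample covariance, with variances [math]\displaystyle{ \,\lambda_{k1},\dots,\lambda_{kd} }[/math] and center [math]\displaystyle{ \,\mu_k }[/math], is

[math]\displaystyle{ S_k(x) = \|x-\mu_k\| + \sum_{j=1}^{d} \frac{\lambda_{kj}}{\sum_{i=1}^{d}\lambda_{ki}} \,\operatorname{dist}\!\left(x,\ \{\mu_k + t\,v_{kj} : t \in \mathbb{R}\}\right), }[/math]

and Step 3 assigns [math]\displaystyle{ \,x }[/math] to [math]\displaystyle{ \,\arg\min_k S_k(x) }[/math]. The following is a minimal sketch of this reading of the algorithm; interpreting "distance to a direction" as distance to the line through the center, and all names below, are assumptions rather than part of the proposal.

<pre>
# A minimal sketch of the proposed outlier-based classifier, assuming the
# "directions of variation" are the principal components (eigenvectors of the
# per-class sample covariance). All names here are illustrative.
import numpy as np

def fit_class(X):
    """Step 1: the center and d directions of variation of one class."""
    center = X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    weights = eigvals / eigvals.sum()  # share of total variation per direction
    return center, eigvecs, weights    # eigvecs columns are unit directions

def outlier_score(x, center, directions, weights):
    """Step 2: distance to the center plus the weighted distances to the
    line through the center along each direction of variation."""
    diff = x - center
    score = np.linalg.norm(diff)
    for j in range(directions.shape[1]):
        v = directions[:, j]
        # Distance from x to the line {center + t*v}: drop the component along v.
        score += weights[j] * np.linalg.norm(diff - (diff @ v) * v)
    return score

def classify(x, class_models):
    """Step 3: assign x to the class with the smallest sum."""
    return int(np.argmin([outlier_score(x, *m) for m in class_models]))

# Two synthetic Gaussian classes as a quick check:
rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], [1.0, 0.2], size=(100, 2))
X1 = rng.normal([3, 3], [0.2, 1.0], size=(100, 2))
print(classify(np.array([0.5, 0.1]), [fit_class(X0), fit_class(X1)]))  # -> 0
</pre>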


I would like to evaluate the effectiveness of this algorithm against LDA, QDA, and other classifiers using data sets from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/).
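A hedged sketch of one such comparison, reusing fit_class and classify from the sketch above, with scikit-learn's LDA and QDA as baselines; the data set (Iris, originally from UCI) and the single 70/30 split are illustrative choices, not part of the proposal.

<pre>
# Compare the outlier approach against LDA and QDA on one data set.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # stand-in for a UCI download
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [fit_class(X_tr[y_tr == k]) for k in np.unique(y_tr)]
ours = np.mean([classify(x, models) == t for x, t in zip(X_te, y_te)])
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
print(f"outlier approach: {ours:.3f}  LDA: {lda:.3f}  QDA: {qda:.3f}")
</pre>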

Project 2: Apply Hadoop Map-Reduce to a Classification Method

By: Maia Hariri, Trevor Sabourin, and Johann Setiawan

Goal: develop map-reduce processes that can correctly classify large, distributed data sets.

Potential projects:

1. Use Hadoop Map-Reduce to implement the support vector machine (kernel) classification algorithm.
2. Use Hadoop Map-Reduce to implement the LDA classification algorithm on a novel problem (e.g., forensic identification of handwriting); a sketch of how LDA decomposes into map and reduce phases is given below.
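Both projects follow the same pattern: mappers summarize their local data partition, and the reducer combines the summaries. As a concrete illustration for item 2, the sketch below computes LDA's sufficient statistics (per-class counts, sums, and sums of outer products) in a single-machine Python simulation of the map and reduce phases. A real implementation would express the same mapper and reducer through Hadoop's Java API or Hadoop Streaming; all names and the data layout here are assumptions.

<pre>
# Single-machine simulation of map-reduce LDA sufficient statistics.
from collections import defaultdict
import numpy as np

def mapper(partition):
    """Emit (class_label, (count, sum, sum_of_outer_products)) for the records
    in one partition; each partition would live on a separate node."""
    stats = {}
    for label, x in partition:
        n, s, ss = stats.get(label, (0, 0.0, 0.0))
        stats[label] = (n + 1, s + x, ss + np.outer(x, x))
    return list(stats.items())

def reducer(mapped):
    """Combine the partial statistics into the class means and pooled
    covariance that LDA needs."""
    totals = defaultdict(lambda: (0, 0.0, 0.0))
    for part in mapped:
        for label, (n, s, ss) in part:
            n0, s0, ss0 = totals[label]
            totals[label] = (n0 + n, s0 + s, ss0 + ss)
    means, scatter, n_total = {}, 0.0, 0
    for label, (n, s, ss) in totals.items():
        mu = s / n
        means[label] = mu
        scatter = scatter + (ss - n * np.outer(mu, mu))  # within-class scatter
        n_total += n
    pooled_cov = scatter / (n_total - len(totals))
    return means, pooled_cov

# Two partitions standing in for two nodes:
rng = np.random.default_rng(1)
data = ([(0, rng.normal([0.0, 0.0])) for _ in range(50)] +
        [(1, rng.normal([2.0, 2.0])) for _ in range(50)])
means, cov = reducer([mapper(p) for p in (data[:50], data[50:])])
print(means[0], means[1])
</pre>

The appeal of this decomposition is that each mapper emits only a fixed-size summary per class, so the data shuffled between phases scales with the number of classes times [math]\displaystyle{ \,d^2 }[/math], not with the number of records.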