proposal for STAT946 projects Fall 2010

Project 1 : Sampling Landmarks Using a Convergence Approach

By: Yongpeng Sun


Intuition:

When we have a large number of data points and wish to sample only a small portion of them as landmarks, simple random sampling (SRS) does not guarantee good results: when the sample is very small, SRS can return points that form a small cluster within the data space and are therefore not representative of the whole data set. We would like the landmarks to be as representative of the data set as possible. To this end, starting from the center of the data space, we alternately select landmarks that are as near as possible to the center and as far away as possible from it, effectively removing each newly sampled landmark by shrinking the distance matrix. During these steps, the closest point to the center and the furthest point from the center among the remaining data gradually converge towards each other: the distance from the center to its nearest remaining point gradually increases, while the distance from the center to its furthest remaining point gradually decreases, as both points are removed from the data space at each step.

Before starting the procedure, we require the distance matrix containing the pairwise distances between the data points. This matrix is constructed only once, and it is updated twice during each step.


The procedure for obtaining the landmarks can be described by the following steps; a code sketch follows the list:

Step 1: Find the point closest to the center, and add this point to the to-do queue and to the landmarks. Update the distance matrix by removing the row and the column corresponding to this point.
Step 2: For the front-most point in the to-do queue, find the closest point and the furthest point from it, and add these two points to the to-do queue and to the landmarks. Update the distance matrix by removing the row and the column corresponding to each of these two points. Remove the front-most point from the to-do queue.
Step 3: If the number of landmarks suffices, stop. Otherwise, go back to Step 2.
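A minimal sketch of these steps, assuming the data are given as a NumPy array, the "center" is taken to be the mean of the data, and distances are Euclidean; the function name sample_landmarks is an illustrative assumption rather than part of the proposal. Instead of physically deleting rows and columns of the distance matrix, the sketch keeps the matrix intact and tracks the set of remaining indices, which has the same effect.

<pre>
import numpy as np
from collections import deque

def sample_landmarks(X, n_landmarks):
    """Sketch of the convergence-based landmark sampling described above.

    X           : (n, d) array of data points.
    n_landmarks : number of landmarks to select.
    Returns the indices of the selected landmarks.
    """
    n = X.shape[0]
    # Pairwise distance matrix, constructed only once.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=2))

    # Distances from every point to the center of the data space
    # (here the center is assumed to be the mean of the data).
    center = X.mean(axis=0)
    d_center = np.linalg.norm(X - center, axis=1)

    remaining = set(range(n))      # points still in the data space
    landmarks = []
    todo = deque()

    # Step 1: the point closest to the center seeds the queue and the landmarks.
    first = int(np.argmin(d_center))
    todo.append(first)
    landmarks.append(first)
    remaining.discard(first)       # stands in for removing its row and column

    # Steps 2-3: take the front of the queue, add its nearest and furthest
    # remaining points, and repeat until enough landmarks are collected.
    while len(landmarks) < n_landmarks and todo:
        p = todo.popleft()
        if not remaining:
            break
        rem = np.fromiter(remaining, dtype=int)
        nearest = int(rem[np.argmin(D[p, rem])])
        furthest = int(rem[np.argmax(D[p, rem])])
        for q in {nearest, furthest}:
            if len(landmarks) < n_landmarks:
                todo.append(q)
                landmarks.append(q)
                remaining.discard(q)

    return landmarks
</pre>

For the 27-dimensional phoneme data this could be called as, e.g., sample_landmarks(phoneme_data, 50), and the resulting landmarks compared against an SRS sample of the same size.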


Using 27-dimensional phoneme data, I would like to compare the efficiency of my method of sampling landmarks against the most common approach, simple random sampling (SRS).



( NOTE: STILL BEING EDITED )


Project 2 : Sparse LLE

By: Manda Winlaw

One drawback of using Local Linear Embedding (LLE) for dimensionality reduction is that it does not perform well if there are outliers or noise in the data. To overcome this limitation, one possible approach is to require sparsity in the vector of reconstruction weights. The reconstruction weights for each data point <math>\displaystyle\overline{X}_i</math> are chosen to minimize the reconstruction errors measured by the following cost function,

<math>\displaystyle{E(W) = \sum_i \left|\overline{X}_i - \sum_{j=1}^{K} W_{ij}\overline{X}_j\right|^2}</math>

where the <math>\displaystyle\overline{X}_j, j = 1, \ldots, K</math> are the <math>\displaystyle{K}</math> nearest neighbors of <math>\displaystyle\overline{X}_i</math> and the weights, <math>\displaystyle{W_{ij}}</math>, must satisfy <math>\displaystyle\sum_{j} W_{ij} = 1</math>.
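For reference, the unpenalized weights have a well-known closed-form solution: with the sum-to-one constraint, minimizing the reconstruction error for a single point reduces to solving a small linear system involving the local Gram matrix of its neighbours. A minimal sketch in Python (the function name lle_weights and the small stabilizing term reg are illustrative assumptions, not part of the proposal):

<pre>
import numpy as np

def lle_weights(x_i, neighbors, reg=1e-3):
    """Reconstruction weights of one point in standard (unpenalized) LLE.

    x_i       : (d,) data point.
    neighbors : (K, d) array holding its K nearest neighbors.
    reg       : small regularizer added to the local Gram matrix for stability.
    Returns a (K,) weight vector summing to one.
    """
    K = neighbors.shape[0]
    # Local differences and Gram matrix C_jk = (x_i - X_j) . (x_i - X_k).
    Z = x_i - neighbors                      # (K, d)
    C = Z @ Z.T                              # (K, K)
    C += reg * np.trace(C) * np.eye(K) / K   # guard against a singular C
    # Minimizing |x_i - sum_j W_ij X_j|^2 subject to sum_j W_ij = 1
    # reduces to solving C w = 1 and normalizing the solution.
    w = np.linalg.solve(C, np.ones(K))
    return w / w.sum()
</pre>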

From this cost function we can see that if some of the weights are zero due to a sparsity constraint then outliers may be eliminated from the reconstruction.

To impose sparsity, we introduce an additional constraint into the optimization problem for each data point to get,

<math>\displaystyle{\min_{\overline{W}_i} \left|\overline{X}_i - \sum_j W_{ij}\overline{X}_j\right|^2}</math> such that <math>\; \sum_j W_{ij} = 1, \; P(\overline{W}_i) \leq c_1,</math>

where <math>\displaystyle{c_1}</math> is a constant and <math>P(\overline{W}_i)</math> is a convex penalty function of <math>\overline{W}_i</math> which can take different forms.

Of particular interest will be <math>P(\overline{W}_i) = \left\| \overline{W}_i \right\|^2_2</math>, in which case the optimization problem is equivalent to ridge regression, and the lasso penalty function, <math>P(\overline{W}_i) = \left\| \overline{W}_i \right\|_1</math>. As noted in <ref>R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288, 1996.</ref>, the lasso penalty is more appropriate for controlling sparsity than the <math>L_2</math>-norm penalty: the <math>L_2</math>-norm penalty does not actually give sparse solutions; rather, the weights are shrunk toward small but non-zero values. However, the <math>L_2</math>-norm penalty yields a closed-form solution for the weights, whereas the lasso penalty has no closed-form solution, so a numerical optimization algorithm is required, which increases the computational complexity of the LLE algorithm. My goal is to examine both of these penalty functions as a means of improving the performance of the LLE algorithm on noisy data, and to compare their performance in terms of accuracy and computational complexity.
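To make the comparison concrete, here is a hedged sketch of the two penalized weight problems for a single point, written with the penalty added to the objective through a multiplier lam rather than as an explicit constraint (the two forms correspond for a suitable choice of the constant <math>c_1</math>). The function names, the closed-form expression used for the ridge case, and the use of SciPy's general-purpose SLSQP solver for the lasso case are illustrative assumptions; a dedicated lasso solver would be preferable in practice.

<pre>
import numpy as np
from scipy.optimize import minimize

def ridge_lle_weights(x_i, neighbors, lam=1e-2):
    """L2-penalized reconstruction weights with the sum-to-one constraint."""
    K = neighbors.shape[0]
    Z = x_i - neighbors                      # (K, d) local differences
    C = Z @ Z.T                              # local Gram matrix
    # With the sum-to-one constraint, the L2-penalized problem reduces to
    # solving (C + lam*I) w = 1 and rescaling the solution to sum to one.
    w = np.linalg.solve(C + lam * np.eye(K), np.ones(K))
    return w / w.sum()

def lasso_lle_weights(x_i, neighbors, lam=1e-2):
    """L1-penalized reconstruction weights via a general-purpose solver."""
    K = neighbors.shape[0]

    def objective(w):
        resid = x_i - neighbors.T @ w        # reconstruction residual
        return resid @ resid + lam * np.abs(w).sum()

    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    w0 = np.full(K, 1.0 / K)                 # start from uniform weights
    res = minimize(objective, w0, constraints=cons, method='SLSQP')
    return res.x
</pre>

The ridge variant costs one small linear solve per point, while the lasso variant invokes an iterative solver per point, which is the computational overhead discussed above.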

References

<references />