Difference between revisions of "stat841f14"

Principal components analysis (Lecture 1: Sept. 10, 2014)

Introduction

Principal Component Analysis (PCA), first invented by Karl Pearson in 1901, is a statistical technique for data analysis. Its main purpose is to reduce the dimensionality of the data.

Suppose there is a set of data points in a d-dimensional space. The goal of PCA is to find a linear subspace with lower dimensionality $\, p$ ($p \leq d$), such that maximum variance is retained in this lower-dimensional space. The linear subspace can be specified by p orthogonal vectors, such as $\, u_1 , u_2 , ... , u_p$, which form a new coordinate system. Ideally $\, p \ll d$ (worst case would be to have $\, p = d$). These vectors are called the 'Principal Components'. In other words, PCA aims to reduce the dimensionality of the data, while preserving its information (or minimizing the loss of information). Information comes from variation. In other words, capturing more variation in the lower dimensional subspace means that more information about the original data is preserved.

For example, if all data points have the same value along one dimension (as depicted in the figures below), then that dimension does not carry any information. To preserve the original information in a lower dimensional subspace, the subspace needs to contain components (or dimensions) along which data has most of its variability. In this case, we can ignore the dimension where all data points have the same value. In practical problems, however, finding a linear subspace of lower dimensionality, where no information about the data is lost, is not possible. Thus the loss of information is inevitable. Through PCA, we try to reduce this loss and capture most of the features of data.

The figure below demonstrates an example of PCA. Data is transformed from original 3D space to 2D coordinate system where each coordinate is a principal component.

Now, consider the ability of the two example components above (PC1 and PC2) to retain information from the original data. The data in the original space is projected onto each of these two components separately in the figures below. Notice that PC1 captures the variation in the original data better than PC2. If the goal were to reduce the original d=3 dimensional data to p=1 dimension, PC1 would be preferable to PC2.

Comparison of two different components used for dimensionality reduction

PCA Plot example

For example<ref> https://onlinecourses.science.psu.edu/stat857/sites/onlinecourses.science.psu.edu.stat857/files/lesson05/PCA_plot.gif </ref>, in the top left corner of the image below, the point $x_i$ is shown in a two-dimensional space and its coordinates are $(x_{i,1}, x_{i,2})$.

All the red points in the plot are represented by their projected values on the two original coordinates (Feature 1, Feature 2).

The PCA technique uses new coordinates (shown in green). As the coordinate system changes, each point is represented by a new pair of coordinate values, for example $(z_{i,1}, z_{i,2})$. These values are determined by the original vector and by the relation between the rotated coordinates and the original ones.

In the original coordinates the two features $x_1$ and $x_2$ are correlated. After applying PCA, if the two principal components are used as the new coordinates, then the data along these two coordinates are guaranteed to have correlation = 0.

By rotating the coordinates we performed an orthonormal transform on the data. After the transform, the correlation is removed - which is the whole point of PCA.

Consider an extreme case where all of the points lie on a straight line; in the original coordinate system, two coordinates are needed to describe these points.

In short, after rotating the coordinates we need only one coordinate, since the value on the other one is always zero. This example shows PCA's dimension reduction in an extreme case; in the real world, points may not fit exactly on a line, but only approximately.

PCA applications

As mentioned, PCA is a method to reduce the data dimension, where possible, to principal components that cover as much of the data variation as possible. This technique is useful in many types of applications involving high-dimensional data, such as data pre-processing, neuroscience, computer graphics, meteorology, oceanography, gene expression analysis, economics, and finance.

Data is usually represented by many variables. In data preprocessing, PCA is a technique used to select a subset of variables in order to figure out the best model for the data. In neuroscience, PCA is used to identify the specific properties of a stimulus that increase a neuron's probability of generating an action potential.

The figures below show some examples of PCA in real world applications.

Tracking the useful features based on PCA:

In real world applications we need to know which features are important after dimensionality reduction, so that we can use those features as our new data set to continue our project. To do this, we examine the principal components that have been chosen. The entries of a principal component (eigenvector) represent the weights of the original features within that component. Based on these weights we can tell how important each feature is to the principal component, and then choose the several most important features from the original data instead of using all of them.
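As a rough illustration (not from the lecture), the following MATLAB sketch ranks features by the magnitude of their weights in the first principal component; the toy data matrix X and all variable names are illustrative assumptions.

 % Toy illustration: rank the original features by the magnitude of their
 % weights (loadings) in the first principal component.
 X  = randn(10, 200);                     % toy d-by-n data matrix (illustrative)
 Xc = X - mean(X, 2)*ones(1, size(X, 2)); % centre the data
 S  = (Xc*Xc')/size(X, 2);                % sample covariance matrix (d-by-d)
 [V, D] = eig(S);
 [~, order] = sort(diag(D), 'descend');
 u1 = V(:, order(1));                     % first principal component
 [~, featureRank] = sort(abs(u1), 'descend');
 featureRank(1:5)                         % indices of the 5 "heaviest" features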

Mathematical details

PCA is a transformation from the original (d-dimensional) space to a linear subspace with a new (p-dimensional) coordinate system. Each coordinate of this subspace is called a Principal Component. The first principal component is the coordinate of this system along which the data points have the maximum variation. That is, if we project the data points onto this coordinate, the maximum variance of the data is obtained (compared to any other vector in the original space). The second principal component is the coordinate in the direction of the second greatest variance of the data, and so on.

For example, if we are mapping from 3-dimensional space to 2 dimensions, the first two principal components span the 2-dimensional plane that retains the maximum variance of the data when each 3D data point is mapped to its nearest point on that plane.

Let's denote the basis of the original space by $\mathbf{v_1}$, $\mathbf{v_2}$, ... , $\mathbf{v_d}$. Our goal is to find the principal components (coordinates of the linear subspace), denoted by $\mathbf{u_1}$, $\mathbf{u_2}$, ... , $\mathbf{u_p}$, where ideally $p \leq d$. First, we would like to obtain the first principal component $\mathbf{u_1}$, the component in the direction of maximum variance. This component can be treated as a vector in the original space and so can be written as a linear combination of the basis of the original space.

$\mathbf{u_1}=w_1\mathbf{v_1}+w_2\mathbf{v_2}+...+w_d\mathbf{v_d}$

Vector $\mathbf{w}$ contains the weight of each basis in this combination.

$\mathbf{w}=\begin{bmatrix} w_1\\w_2\\w_3\\\vdots\\w_d \end{bmatrix}$

Suppose we have n data points in the original space. We represent each data point by $\mathbf{x_1}$, $\mathbf{x_2}$, ..., $\mathbf{x_n}$. The projection of each point $\mathbf{x_i}$ onto $\mathbf{u_1}$ is $\mathbf{w}^T\mathbf{x_i}$.

Let $\mathbf{S}$ be the sample covariance matrix of data points in original space. The variance of the projected data points, denoted by $\Phi$ is

$\Phi = Var(\mathbf{w}^T \mathbf{x_i}) = \mathbf{w}^T \mathbf{S} \mathbf{w}$

We would like to maximize $\Phi$ over the set of all vectors $\mathbf{w}$ in the original space. But this problem is not yet well-defined, because for any choice of $\mathbf{w}$ we can increase $\Phi$ simply by multiplying $\mathbf{w}$ by a scalar greater than one, and the maximum goes to infinity, which is not a proper solution to our problem. So, we add the following constraint to the problem to bound the length of the vector $\mathbf{w}$:

$max_w \Phi = \mathbf{w}^T \mathbf{S} \mathbf{w}$

subject to : $\mathbf{w}^T \mathbf{w} =1$

This constraint makes $\mathbf{w}$ a unit vector in d-dimensional Euclidean space and, as a result, the optimization problem becomes bounded.

Using Lagrange Multiplier technique we have:

$L(\mathbf{w} , \lambda ) = \mathbf{w}^T \mathbf{S} \mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1 )$

By taking the derivative of $L$ w.r.t. the primal variable $\mathbf{w}$ we have:

$\frac{\partial L}{\partial \mathbf{w}}=( \mathbf{S}^T + \mathbf{S})\mathbf{w} -2\lambda\mathbf{w}= 2\mathbf{S}\mathbf{w} -2\lambda\mathbf{w}= 0$

Note that $\mathbf{S}$ is symmetric so $\mathbf{S}^T = \mathbf{S}$.

From the above equation we have:

$\mathbf{S} \mathbf{w} = \lambda\mathbf{w}$.

So $\mathbf{w}$ is an eigenvector and $\lambda$ the corresponding eigenvalue of the matrix $\mathbf{S}$. (Taking the derivative of $L$ w.r.t. $\lambda$ just regenerates the constraint of the optimization problem.)

By multiplying both sides of the above equation by $\mathbf{w}^T$ and considering the constraint, we obtain:

$\Phi = \mathbf{w}^T \mathbf{S} \mathbf{w} = \mathbf{w}^T \lambda \mathbf{w} = \lambda \mathbf{w}^T \mathbf{w} = \lambda$

The interesting result is that the objective function is equal to an eigenvalue of the covariance matrix. So, to obtain the first principal component, which maximizes the objective function, we just need the eigenvector corresponding to the largest eigenvalue of $\mathbf{S}$. Subsequently, the second principal component is the eigenvector corresponding to the second largest eigenvalue, and so on.
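A small MATLAB check of this result on toy data (a sketch, not part of the original notes): the variance of the data projected onto the top eigenvector of S equals the largest eigenvalue.

 % The first principal component is the eigenvector of the sample covariance
 % matrix with the largest eigenvalue; the variance of the projected data
 % equals that eigenvalue.
 rng(1); X = randn(3, 500);               % toy data, d = 3, n = 500
 Xc = X - mean(X, 2)*ones(1, 500);        % centre the data
 S  = (Xc*Xc')/500;                       % sample covariance matrix
 [V, D] = eig(S);
 [lambda, idx] = max(diag(D));
 w = V(:, idx);                           % first principal component
 projVar = var(w'*Xc, 1);                 % variance of the projected data
 [lambda, projVar]                        % the two numbers agree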

Principal component extraction using singular value decomposition

Singular Value Decomposition (SVD) is a well-known way to decompose any m by n matrix $\mathbf{A}$ into three useful matrices.

$\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$.

where $\mathbf{U}$ is an m by m unitary matrix, $\mathbf{UU^T} = I_m$; each column of $\mathbf{U}$ is an eigenvector of $\mathbf{AA^T}$,

$\mathbf{\Sigma}$ is an m by n diagonal matrix whose non-zero elements (the singular values) are the square roots of the eigenvalues of $\mathbf{AA^T}$, and

$\mathbf{V}$ is an n by n unitary matrix, $\mathbf{VV^T} = I_n$; each column of $\mathbf{V}$ is an eigenvector of $\mathbf{A^TA}$.

Now, comparing the concepts of PCA and SVD, one may find out that we can perform PCA using SVD.

Let's construct a d by n matrix from our n data points such that each column of this matrix represents one data point in d-dimensional space:

$\mathbf{X} = [\mathbf{x_1} \mathbf{x_2} .... \mathbf{x_n} ]$

and make another matrix $\mathbf{X}^*$ simply by subtracting the mean of data points from $\mathbf{X}$.

$\mathbf{X}^* = \mathbf{X} - \mu_X[1, 1, ... , 1], \mu_X = \frac{1}{n} \sum_{i=1}^n x_i$

Then we will get a zero-mean version of our data points, for which $\mathbf{X^*} \mathbf{X^{*^T}}$ is (up to a factor of $\frac{1}{n}$) the covariance matrix.

So, applying SVD to $\mathbf{X}^*$, the squared diagonal entries of $\mathbf{\Sigma}$ and the columns of $\mathbf{U}$ give us the eigenvalues and corresponding eigenvectors of the covariance matrix, respectively. We can then easily extract the desired principal components.

MATLAB example

In this example we use different pictures of a man's face with different facial expressions. But, these pictures are noisy. So the face is not easily recognizable. The data set consists of 1965 pictures each 20 by 28. So dimensionality is 20*28= 560.

Our goal is to remove the noise using PCA, by means of the SVD function in MATLAB. We know that noise is not the main feature that makes these pictures look different, so the noise lies among the components with the least variance. We first extract the principal components and then remove those corresponding to the smallest eigenvalues of the covariance matrix. Here, the singular values are sorted in descending order along the diagonal of the matrix S. For example, the first column of matrix U, which corresponds to the first diagonal element of S, is the first principal component.

We then reconstruct the picture using the first d principal components. If d is too large, we cannot completely remove the noise. If it is too small, we lose some information from the original data; for example, we may lose the facial expression (smile, sadness, etc.). We can change d until we achieve a reasonable balance between noise reduction and information loss.

File:noisy-face.jpg
Noise reduced version of a picture in MATLAB example
 >> % load the noisy data; the file "noisy" stores our variable X, which contains the pictures
>> load noisy
>>
>>
>> % show a sample image in column 1 of matrix X
>> imagesc(reshape(X(:,1),20,28)')
>>
>> % set the color of image to grayscale
>> colormap gray
>>
>> % perform SVD; if the X matrix is full rank, we will obtain 560 PCs
>> [U S V] = svd(X);
>>
>> d= 10;
>>
>> % reconstruct X using only the first d principal components and removing noise
>> XX = U(:, 1:d)* S(1:d,1:d)*V(:,1:d)' ;
>>
>> % show image in column 1 of XX which is a noise reduced version
>> imagesc(reshape(XX(:,1),20,28)')


PCA continued, Lagrange multipliers, singular value decomposition (SVD) (Lecture 2: Sept. 15, 2014)

Principal component analysis (continued)

PCA is a method to reduce dimensions in data or to extract features from data.

Given a data point in vector form $\,x$, the goal of PCA is to map $\,x$ to $\,y$ where $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^p$, such that p is much less than d. For example, we could map a two-dimensional data point onto any one-dimensional vector in the original 2D space, or we could map a three-dimensional data point onto any 2D plane in the original 3D space.

The transformation from d-dimensional space to p-dimensional space is chosen in such a way that maximum variance is retained in the lower dimensional subspace. This is useful because variance represents the main differences between the data points, which is exactly what we are trying to capture when we perform data visualization. For example, when taking 64-dimensional images of hand-written digits, projecting them onto 2-dimensional space using PCA captures a significant amount of the structure we care about.

In terms of data visualization, it is fairly clear why PCA is important. High dimensional data is sometimes impossible to visualize. Even a 3D space can be fairly difficult to visualize, while a 2D space, which can be printed on a piece of paper, is much easier to visualize. If a higher dimensional dataset can be reduced to only 2 dimensions, it can be easily represented by plots and graphs.

In the case of many data points $\,x_i$ in d-dimensional space:

$\,X_{d \times n} = [x_1\; x_2\; ...\; x_n]$

There is an infinite number of vectors in $\mathbb{R}^p$ to which the points in X can be mapped. When mapping these data points to lower dimensions, information will be lost. To preserve as much information as possible, the points are mapped to a vector in $\mathbb{R}^p$ that preserves as much variation in the data as possible. In other words, the data is mapped to the vector in $\mathbb{R}^p$ along which the mapped data will have maximum variance. In the images below, the data points mapped onto u in Mapping 1 have greater variance than in Mapping 2, and thus Mapping 1 preserves more information in the data.

Notice (Figure 1) that the projection of a single data point x onto u is $u^Tx$ (a scalar); thus the projection of multiple data points (Figure 2) onto u is computed by $Y = u^TX$, where X is the matrix representing the multiple data points. The variance of Y can be calculated easily as Y is a 1 by n vector.

Extending to a higher dimensional space $\mathbb{R}^q$ where q > 1, the covariance matrix is used to find the variance of Y. The formula is given by $\mathrm{Var}(u^TX) = u^TSu$, where S is the sample covariance matrix of X.

In finding the first principal component the objective is to find the direction u which will have the greatest variance of the points projected onto it. This can be expressed as the following optimization problem:

$\max_u \; u^TSu$
$\text{s.t. } u^Tu = 1$

The restriction $u^Tu = 1$ is to bound the problem. If the length of u were not restricted, $u^TSu$ would go to infinity as the magnitude of u goes to infinity. The selection of 1 as the bound is arbitrary.

Also from the three images above, we can see that the first principal component is the "line of best fit" that passes through the multi-dimensional mean of all data points and minimizes the sum of squared distances from the data points to the vector. Though not illustrated, if the above example were a dimensionality reduction from R3 to R2, the second principal component would be found in the same fashion as the first component, after the correlation between the first component and the data points has been removed.

Lagrange multipliers

The Lagrange multiplier is a method for solving constrained optimization problems. After constraining the length of the principal component vector, the task of maximizing variance in the lower dimensional space becomes a constrained optimization problem.

Lagrange multiplier theorem

The method of Lagrange multipliers states that given the optimization problem:

   $max\ f(x,y)$
$s.t.\ g(x,y) = c$


The Lagrangian is defined by:

$\ L(x,y,\lambda)= f(x,y) - \lambda(g(x,y)-c)$

where the $\ \lambda$ term may be either added or subtracted.

Which implies, where $\bigtriangledown$ is the gradient, that:

$\bigtriangledown\ f(x,y) = \lambda \bigtriangledown g(x,y)$

$\bigtriangledown\ f(x,y) - \lambda \bigtriangledown g(x,y) = 0$

In practice,

$\bigtriangledown\ L$ is calculated by computing the three partial derivatives of $\ L$ with respect to $\ x, y, \lambda$, setting them to 0, and solving the resulting system of equations.

That is to say, the solution is given by taking the solution of:

$\ \bigtriangledown L = \begin{bmatrix} \frac{\partial L}{\partial x}\\\frac{\partial L}{\partial y}\\\frac{\partial L}{\partial \lambda}\end{bmatrix}= 0$

which gives the critical points; the maximizer of the objective function f(x,y) is among them.

Intuitive explanation

The Lagrange multiplier utilizes gradients to find maximum values subject to constraints. A gradient $\bigtriangledown$ is a vector which collects all of the partial derivatives of a function. Intuitively, it can be thought of as the direction of steepest ascent at each point of the graph.

Finding a point where the gradients of f(x,y) and g(x,y) differ only by a factor lambda, $\bigtriangledown f(x,y) = \lambda \bigtriangledown g(x,y)$, identifies a point where the constraint and objective functions point in the same direction. This is important because if the gradient of the objective had a component along the constraint surface, moving in that direction along the constraint would increase the objective, meaning we were not at a maximum. The factor lambda is important because it is adjusted so that the constraint is satisfied at the point in question.

Lagrange multiplier example

Consider the function $\ f(x,y) = x - y$ subject to the constraint $\ g(x,y) = x^2 + y^2 = 1$. We would like to maximize this objective function.

Calculating the Lagrangian gives

$\ \text{L}(x,y,\lambda) = x - y - \lambda g(x,y) = x - y - \lambda (x^2 + y^2 - 1)$

This gives three partial derivatives:

$\frac{\partial L}{\partial x} = 1 - 2\lambda x = 0, \frac{\partial L}{\partial y} = -1 - 2\lambda y = 0, \frac{\partial L}{\partial \lambda} = - (x^2 + y^2 - 1) = 0$

This gives two solutions:

1. $x = \frac{\sqrt{2}}{2}, y = \frac{-\sqrt{2}}{2}, \lambda = \frac{1}{\sqrt{2}}$
2. $x = \frac{-\sqrt{2}}{2}, y = \frac{\sqrt{2}}{2}, \lambda = \frac{-1}{\sqrt{2}}$

Both solutions give extreme values of our objective function. Because this is a maximization problem, we are interested in the solution which gives the maximum output, which is solution 1.
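A quick numerical check of this example in MATLAB (a sketch, not part of the original notes):

 % Evaluate f(x,y) = x - y at the two critical points found above.
 f = @(x, y) x - y;
 s = sqrt(2)/2;
 f( s, -s)     % =  sqrt(2), the maximum (solution 1)
 f(-s,  s)     % = -sqrt(2), the minimum (solution 2)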

Singular value decomposition revisited

Any matrix X can be decomposed to the three matrices: $\ U \Sigma V^T$ where:

$\ U$ is an m x m matrix whose columns are the eigenvectors of $XX^T$

$\ \Sigma$ is an m x n matrix containing the singular values along the diagonal and zeroes elsewhere

$\ V^T$ is the transpose of an n x n matrix whose columns are the eigenvectors of $X^TX$

Connection of Singular Value Decomposition to PCA

If $\,X$ is centered (has a mean of 0) then $\,XX^T$ is (up to a factor of $\frac{1}{n}$) the covariance matrix.

Recall that for a square, symmetric matrix $\,A$, we can have $\,A = W \Lambda W^T$ where columns of $\,W$ are eigenvectors of $\,A$, and $\,\Lambda$ is a diagonal matrix containing the eigenvalues.

Using the SVD of $\,X$ we get $\,X=U \Sigma V^T$. Then consider $\,XX^T$:

$\ XX^T = (U \Sigma V^T)(U \Sigma V^T)^T$

$\ XX^T = (U \Sigma V^T)(V \Sigma U^T)$

$\ XX^T = U \Sigma V^T V \Sigma U^T$

$\ XX^T = U \Sigma I \Sigma U^T$ (By the definition of V)

$\ XX^T = U \Sigma^2 U^T$

Thus $\,U$ is the matrix containing eigenvectors of the covariance matrix $\,XX^T$.

It is important to remember that $\,U$ and $\,V$ are unitary matrices, i.e.: $\, U^TU=UU^T=I$ and $\,V^TV=VV^T=I$, each equality using the appropriate dimensional identity matrix. It is also interesting that the non-square matrix formed by taking a subset of the columns of $\,U$ or $\,V$ still has the property that $\,V^TV=I$ (the appropriate lower dimensional identity matrix).
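A short MATLAB sketch (toy data, not from the lecture) verifying this connection numerically:

 % For centred data X, the left singular vectors of X are eigenvectors of
 % X*X', and the squared singular values are the corresponding eigenvalues.
 rng(2); X = randn(4, 100);                 % toy data
 X = X - mean(X, 2)*ones(1, 100);           % centre the data
 [U, Sig, V] = svd(X, 'econ');
 [W, Lam] = eig(X*X');
 norm(X*X' - U*Sig^2*U', 'fro')             % approximately 0
 sort(diag(Sig).^2) - sort(diag(Lam))       % eigenvalues match (up to round-off)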

Matlab example of dimensionality reduction using PCA

The following example shows how principal component analysis can be used to distinguish between a handwritten 2 and a handwritten 3. In the example, the original images each have 64 pixels and are thus 64-dimensional. By reducing the dimensionality to 2D, the 2's and 3's can be visualized on a 2D plot and it can be estimated which digit a given point is.

 >> % load file containing 200 handwritten 2's and 200 handwritten 3's
>>
>> who
>> % displays that X is 64x400 matrix containing the handwritten digits
>>
>> imagesc(reshape(X(:,1),8,8)')
>> colormap gray
>> % displays first image, a handwritten 2
>>
>> M = mean(X,2)*ones(1,400);
>> Xb = X-M;
>> % M is the matrix of the mean of column vectors repeated 400 times
>> % Xb is data centred about mean
>>
>> % Now use SVD to find eigenvectors, eigenvalues of Xb
>> [U S V] = svd(Xb);
>>
>> % 2D projection on points
>> Y = U(:,1:2)'*X;
>>
>> % 2D projections of 2's
>> plot(Y(1,1:200),Y(2,1:200),'.')
>>
>> % 2D projections of 3's
>> plot(Y(1,201:400),Y(2,201:400),'.')


PCA Algorithm, Dual PCA, and Kernel PCA (Lecture 3: Sept. 17, 2014)

PCA Algorithm (Algorithm 1)

Recover basis
To find the basis for PCA we first center the data by removing the mean of the sample, then calculate the covariance:
$XX^T = \sum_{i=1}^n x_i x_i^T$
Let $U$ be the eigenvectors of $XX^T$ corresponding to the top $\,d$ eigenvalues.
Encode training data
To encode our data using PCA we let
$\,Y = U^TX$
where $Y$ is an encoded matrix of the original data
Reconstruct training data
To project our data back to the higher dimension
$\hat{X} = UY = UU^T X$
Encode test example
$\,y = U^T x$
where $\,y$ is an encoding of $\,x$
Reconstruct test example
$\hat{x} = Uy = UU^T x$
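The following is a minimal MATLAB sketch of Algorithm 1 on toy data; the matrix X, the test example x, and the choice p = 2 are illustrative assumptions, not values from the lecture.

 % Sketch of Algorithm 1: recover basis, encode, reconstruct.
 X = randn(5, 100);  x = randn(5, 1);  p = 2;   % toy data and test example
 mu = mean(X, 2);
 Xc = X - mu*ones(1, size(X, 2));        % centre the training data
 [V, D] = eig(Xc*Xc');
 [~, order] = sort(diag(D), 'descend');
 U = V(:, order(1:p));                   % recover basis: top p eigenvectors
 Y    = U'*Xc;                           % encode training data
 Xhat = U*Y;                             % reconstruct (centred) training data
 y    = U'*(x - mu);                     % encode test example
 xhat = U*y;                             % reconstruct (centred) test example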

Dual principal component analysis

Motivation

In some cases the dimension of our data can be much larger than the number of data points $( d \gg n )$. For example, suppose we are interested in weekly closing rates of the Dow Jones Industrial Average, S&P 500 Index, and the NASDAQ Composite Index over the past forty years. In this example we only have three data points $( n = 3 )$, but our dimension is over two thousand! $( d \approx 2080 )$ Consequently, the matrix $XX^T$ used to find the basis $U$ is $2080\times2080$, which is computationally heavy to calculate. Another example is studying gene expression for patients, since the number of genes studied is likely much larger than the number of patients. When the data's dimensionality is high relative to the number of data points collected, a less compute-intensive way of recovering the basis is desirable. Dual PCA is more efficient than PCA when the number of data points is low relative to the dimensionality of the data. This is intuitive because it allows us to model our optimization problem as a decomposition over the data points, rather than over the dimensions.

Recall from Lecture 2 how we used singular value decomposition for PCA:
$\,X = U \Sigma V^T$
Multiply both sides by V to get:
$\,X V = U \Sigma V^T V$
Since $V^T V = I$ (the identity matrix) the SVD can be rearranged as
$\,X V = U \Sigma$
Then, multiply both sides on the right by $\Sigma^{-1}$ to get:
$\,U = X V \Sigma^{-1}$
And so $U$ can be calculated in terms of $V$ and $\Sigma$. The columns of $V$ are the eigenvectors of the $n \times n$ matrix $X^T X$, so $V$ is much easier to find than $U$ because $n \ll d$.
Algorithm
We can replace all instances of $\,U$ in Algorithm 1 with the dual form, $\,U = X V \Sigma^{-1}$, to obtain the algorithm for Dual PCA.
Recover Basis:
For the basis of Dual PCA, we calculate
$\,X^TX$
then let $\,V$ be the eigenvectors of $\,X^TX$ corresponding to the top d eigenvalues. Finally we let $\,\Sigma$ be the diagonal matrix of square roots of the top d eigenvalues.
Encode Training Data:
Using Dual PCA to encode the data, let
$\,Y = U^TX = \Sigma V^T$
where $Y$ is an encoded matrix of the original data
Reconstruct Training Data:
Project the data back to the higher dimension by
$\hat{X} = UY = U \Sigma V^T = X V \Sigma^{-1} \Sigma V^T = X V V^T$
Encode Test Example:
$\,y = U^T x = \Sigma^{-1} V^T X^T x$
where $\,y$ is an encoding of $\,x$
Reconstruct Test Example:
$\hat{x} = Uy = UU^T x = X V \Sigma^{-2} V^T X^T x$
Note that the steps of reconstructing the training data and reconstructing a test example still depend on $d$, and therefore will still be impractical in the case that the original dimensionality of the data ($d$) is very large.
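Below is a minimal MATLAB sketch of dual PCA on toy data with d much larger than n; the data matrix X, the test example x, and p = 2 are illustrative assumptions.

 % Sketch of dual PCA: work with the n-by-n matrix X'X instead of the
 % d-by-d matrix XX'.
 X = randn(1000, 20);  x = randn(1000, 1);  p = 2;   % toy data, d >> n
 X = X - mean(X, 2)*ones(1, size(X, 2));             % centre the data
 [V, D] = eig(X'*X);                                 % n-by-n eigenproblem
 [vals, order] = sort(diag(D), 'descend');
 V   = V(:, order(1:p));
 Sig = diag(sqrt(vals(1:p)));             % square roots of the top p eigenvalues
 Y    = Sig*V';                           % encode training data
 Xhat = X*(V*V');                         % reconstruct training data
 y    = Sig\(V'*(X'*x));                  % encode test example: Sigma^{-1} V' X' x
 xhat = X*V*(Sig\(Sig\(V'*(X'*x))));      % reconstruct: X V Sigma^{-2} V' X' x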

Kernel Principal Component Analysis

Kernel methods

Kernel methods are a class of algorithms used for analyzing patterns and measuring similarity. They are used for finding relations within a data set. Kernel methods map data into a higher-dimensional space to make it easier to find these relations. While PCA is designed for linear variability, a lot of higher-dimensional data are nonlinear. This is where Kernel PCA can be useful to help model the data variability when it lies on a nonlinear manifold. Generally speaking, a kernel is a measure of "similarity" between data points that does not require explicitly computing the mapped data points before taking their dot product. This can be done using different methods. For example, the linear kernel (dot product) looks at whether vectors are orthogonal, while the Gaussian kernel measures whether two objects are similar on a scale of 0 to 1. Kernels can also measure similarity between structured objects such as trees. We will not go into too much detail at the moment.

Some common examples of kernels include:
1. Linear kernel:
$\, k_{ij} = \langle x_{i},x_{j}\rangle$
2. Polynomial kernel:
$\, k_{ij} = (1+\langle x_{i},x_{j}\rangle )^P$
3. Gaussian kernel:
$\, k_{ij} = \exp(-||x_{i}-x_{j}||^2/2\sigma^2)$

where $\ x_{i},x_{j} \in \mathbb{R}^d$

The kernel matrix $\, K_{n \times n}$ is the matrix where $\, k_{ij}$ is the chosen kernel between d-dimensional points $\ x_{i}$ and $\ x_{j}$

Other kernel examples can be found here:
Kernel PCA
With the Kernel PCA, we take the original or observed space and map it to the feature space and then map the feature space to the embedded space of a lower dimension.
$X_{\text{observed space}} \rightarrow H_{\text{feature space}} \rightarrow Y_{\text{embedded space}}$
In the Kernel PCA algorithm, we can use the same argument as PCA and again use the Singular Value Decomposition (SVD) where:
$\, \Phi(X) = U \Sigma V^T$
where $U$ contains the eigenvectors of $\Phi(X)\Phi(X)^T$
The goal is to reduce the dimensionality from the space X to the dimensionality of space Y by passing through H without having to know $\Phi(X)$ exactly.
The algorithm is similar to the Dual PCA algorithm except that the training and the test data cannot be reconstructed since $\Phi(X)$ is unknown.
Algorithm
Recover Basis:
For the basis of Kernel PCA, we calculate the kernel matrix
$\, K = \Phi(X)^T \Phi(X)$
using the kernel function (so $\Phi$ is never computed explicitly), and let $V$ be the eigenvectors of $\Phi(X)^T \Phi(X)$ corresponding to the top $p$ eigenvalues. Finally we let $\Sigma$ be the diagonal matrix of square roots of the top $p$ eigenvalues.
Encode Training Data:
Using Kernel PCA to encode the data, let
$\,Y = U^T \Phi(X) = \Sigma V^T$
Recall $U = \Phi(X) V \Sigma^{-1}$:
$\,Y = U^T \Phi(X) = \Sigma^{-1} V^T \Phi(X)^T \Phi(X) = \Sigma^{-1} V^T K(X,X)$
where $Y$ is an encoded matrix of the image of the original data in the function $\Phi$. $Y$ is computable without knowledge of $\Phi(X)$ via the kernel function.
Reconstruct Training Data:
$\hat{\Phi(X)} = UY = UU^T\Phi(X)=\Phi(X)V\Sigma^{-1}\Sigma V^T=\Phi(X) V V^T$ $\hat{X} = \Phi^{-1}(\hat{\Phi(X)})$
However, $\Phi(X)$ is unknown so we cannot reconstruct the data.
Encode Test Example:
$\,y = U^T \Phi(x) = \Sigma^{-1} V^T \Phi(X)^T \Phi(x) = \Sigma^{-1} V^T K(X,x)$
This is possible because we know that $\Phi(X)^T \Phi(x) = K(X,x)$.
Recall $U = \Phi(X) V \Sigma^{-1}$. Here $\Phi(X)$ is the transformation of the training data $X_{d \times n}$ while $\Phi(x)$ is the transformation of the test example $x_{d \times 1}$.

Reconstruct Test Example:
$\hat{\Phi(x)} = Uy = UU^T \Phi(x) = \Phi(X) V \Sigma^{-2} V^T \Phi(X)^T \Phi(x) = \Phi(X) V \Sigma^{-2} V^T K(X,x)$
Once again, since $\Phi(X)$ is unknown, we cannot reconstruct the data.
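A minimal MATLAB sketch of the encoding steps with a Gaussian kernel on toy data follows (centring of the kernel matrix, covered in the next lecture, is omitted here; X, x, sigma and p are illustrative assumptions):

 % Kernel PCA encoding with a Gaussian kernel (no kernel centring here).
 X = randn(3, 50);  x = randn(3, 1);  sigma = 1;  p = 2;   % toy data
 n  = size(X, 2);
 sq = sum(X.^2, 1);                                        % 1-by-n squared norms
 K  = exp(-(repmat(sq', 1, n) + repmat(sq, n, 1) - 2*(X'*X)) / (2*sigma^2));
 [V, D] = eig(K);
 [vals, order] = sort(diag(D), 'descend');
 V   = V(:, order(1:p));
 Sig = diag(sqrt(vals(1:p)));
 Y   = Sig*V';                                             % encode training data
 kx  = exp(-sum((X - x*ones(1, n)).^2, 1)' / (2*sigma^2)); % K(X, x), n-by-1
 y   = Sig\(V'*kx);                                        % encode test example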

Centering for kernel PCA and multidimensional scaling (Lecture 4: Sept. 22, 2014)

Centering for kernel PCA

In the derivation of kernel PCA we assumed that $\sum_{i=1}^n\Phi(x_i)=0$ which is improbable for a random data set and an arbitrary mapping $\Phi$. We must be able to ensure this condition though we don't know $\Phi$.

Let $\mu=\frac{1}{n}\sum_{j=1}^n\Phi(x_j)$

Define the centered transformation (not computable)

$\tilde{\Phi}:x\mapsto(\Phi(x)-\mu)$

and the centered kernel (computable)

$\tilde{K}:(x,y)\mapsto \tilde{\Phi}(x)^T \tilde{\Phi}(y)$

Then

$\tilde{K}(x,y) = (\Phi(x)-\mu)^T (\Phi(y)-\mu)=\Phi(x)^T\Phi(y)-\mu^T\Phi(y)-\Phi(x)^T\mu+\mu^T\mu$

$=K(x,y)-\frac{1}{n}\sum_{j=1}^n\Phi(x_j)^T\Phi(y)-\frac{1}{n}\sum_{j=1}^n\Phi(x_j)^T\Phi(x)+\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n\Phi(x_j)^T\Phi(x_i)$
$=K(x,y)-\frac{1}{n}\sum_{j=1}^n K(x_j,x)-\frac{1}{n}\sum_{j=1}^n K(x_j,y)+\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n K(x_j,x_i)$
$=K(x,y)-\frac{1}{n}\bold{1}^TK(X,x)-\frac{1}{n}\bold{1}^TK(X,y)+\frac{1}{n^2}\bold{1}^TK(X,X)\bold{1}$
where $\bold{1}=([1,1,...,1]_{1\times n})^T$

Here $X_{d\times n}$ and $\{x_j\}_{j=1}^n$ denotes the data set matrix and elements respectively and $x,y$ represent test elements at which the centered kernel is evaluated.

In practice, with $\, X = [x_1 | x_2 | ... | x_n]$ as the data matrix, the Kernel Matrix, denoted by $K$, will have elements $\, [K]_{ij} = K(x_i,x_j)$.

We evaluate $\tilde{K} = K - \frac{1}{n}\bold{1}\bold{1}^TK-\frac{1}{n}K\bold{1}\bold{1}^T+\frac{1}{n^2}\bold{1}\bold{1}^TK\bold{1}\bold{1}^T$, and find the eigenvectors/eigenvalues of this matrix for Kernel PCA.
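In MATLAB, the centred kernel matrix can be computed as in the sketch below (here a linear kernel on toy data stands in for an arbitrary kernel matrix K):

 % Centring an n-by-n kernel matrix K.
 X = randn(3, 20);  K = X'*X;               % toy linear-kernel matrix
 n   = size(K, 1);
 one = ones(n, n)/n;                        % the matrix (1/n) * 1 * 1'
 Kt  = K - one*K - K*one + one*K*one;       % centred kernel matrix
 [V, D] = eig(Kt);                          % eigenvectors/eigenvalues for kernel PCA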

Metric Multidimensional Scaling

Multidimensional Scaling (MDS) is another dimension reduction method that attempts to preserve the pair-wise distance, $\,d_{ij}$, between data points. MDS helps with the problem of making a configuration of points in Euclidean space by using the distance information between the patterns. For example, suppose we want to map a set of points X, $x \in \mathbb{R}^d$, to a set of points Y, $y \in \mathbb{R}^p$. We want to preserve the pair-wise distances such that $\,d(x_i, x_j)$ is approximately the same as $\,d(y_i, y_j)$.

MDS example

We can map points in 2 dimensions to points in 1 dimension by projecting the points onto a straight line:

Note that the pair-wise distances are relatively preserved. However, the triangle inequality is not preserved:

$\, d(x_1,x_2)+d(x_2,x_3) \gt d(x_1, x_3)$, but $\, d(y_1,y_2)+d(y_2,y_3) = d(y_1, y_3)$

Another example of when it is crucial to preserve the distances between data points in the lower dimensional space (MDS) is when dealing with "distances between cities", as shown here.

Distance matrix

Given a $d \times n$ matrix, $X$, the distance matrix is defined by:

$D^X_{n \times n} = \begin{bmatrix} d^{(X)}_{11} & d^{(X)}_{12} & ... & d^{(X)}_{1n}\\d^{(X)}_{21} & d^{(X)}_{22} & ... & d^{(X)}_{2n}\\...\\d^{(X)}_{n1} & d^{(X)}_{n2} & ... & d^{(X)}_{nn} \end{bmatrix}$

Where $d^{(X)}_{ij}$ is defined as the euclidean distance between points $x_i$ and $x_j$ in $\mathbb{R}^d$

Important properties:

• The matrix is symmetric, i.e. $d^{(X)}_{ij} = d^{(X)}_{ji} \; \forall \; 1 \leq i,j \leq n$
• The matrix is always non-negative, i.e. $d^{(X)}_{ij} \ge 0 \; \forall \; 1 \leq i,j \leq n$
• The diagonal of the matrix is always 0, i.e. $d^{(X)}_{ii}= 0 \; \forall \; 1 \leq i \leq n$
• If the $x_i$'s don't overlap, then all elements that are not on the diagonal are positive
• The triangle inequality holds for all distances: $d_{ab} + d_{bc} \geq d_{ac}$ whenever $a, b, c$ are three different indices

Example:
For:

$x_1 = (0,1)$
$x_2 = (1,0)$
$x_3 = (1,1)$

Or:

$X = \begin{bmatrix} 0 & 1 & 1\\1 & 0 & 1 \end{bmatrix}$

The distance matrix is:

$D^X_{3 \times 3} = \begin{bmatrix} 0 & \sqrt{2} & 1\\\sqrt{2} & 0 & 1\\1 & 1 & 0 \end{bmatrix}$
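The small example above can be checked directly in MATLAB:

 % Distance matrix of the three example points.
 X = [0 1 1; 1 0 1];                        % columns are x1, x2, x3
 n = size(X, 2);
 D = zeros(n);
 for i = 1:n
     for j = 1:n
         D(i, j) = norm(X(:, i) - X(:, j)); % Euclidean distance
     end
 end
 D                                          % [0 sqrt(2) 1; sqrt(2) 0 1; 1 1 0]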

For any mapping to matrix $Y_{p \times n}$, we can calculate the corresponding distance matrix

$D^Y_{n \times n} = \begin{bmatrix} d^{(Y)}_{11} & d^{(Y)}_{12} & ... & d^{(Y)}_{1n}\\d^{(Y)}_{21} & d^{(Y)}_{22} & ... & d^{(Y)}_{2n}\\...\\d^{(Y)}_{n1} & d^{(Y)}_{n2} & ... & d^{(Y)}_{nn} \end{bmatrix}$

Where $d^{(Y)}_{ij}$ is defined as the euclidean distance between points $y_i$ and $y_j$ in $\mathbb{R}^p$

Note that a kernel matrix is a positive semidefinite matrix.

Finding the optimal Y

MDS minimizes the metric

$\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}}$

Where $d_{ij}^{(X)} = ||x_i - x_j||$, and $d_{ij}^{(Y)} = ||y_i - y_j||$

The distance matrix $D^{(X)}$ (whose entries are taken here as the squared pairwise distances $(d_{ij}^{(X)})^2$) can be converted to a kernel matrix:

$K = -\frac{1}{2}HD^{(X)}H$

Where $H = I - \frac{1}{n}ee^T$ and $e=\begin{bmatrix} 1 \\ 1 \\ ... \\ 1 \end{bmatrix}$

Here, K being a kernel matrix implies that K is positive semi-definite, so $x^TKx \geq 0$ $\forall x$.

In fact, there is a one-to-one correspondence between positive semi-definite matrices and kernel matrices.

Theorem

If $D^{(X)}$ is a distance matrix and if $K=-\frac{1}{2}HD^{(X)}H$, then $D^{(X)}$ is Euclidean if and only if $K$ is Positive Semi-Definite,
where $H=I-\frac{1}{n}\bold{e}\bold{e} ^T$ and $\bold{e}=([1,1,...,1]_{1\times n})^T$
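A minimal MATLAB sketch of classical (metric) MDS on toy data follows; note that the double-centring step is applied to squared pairwise distances, and X and p = 2 are illustrative assumptions.

 % Classical MDS: double-centre the squared distance matrix and take the
 % top eigenvectors of the resulting kernel matrix.
 X = randn(3, 30);  p = 2;                  % toy data
 n = size(X, 2);
 D = zeros(n);
 for i = 1:n
     for j = 1:n
         D(i, j) = norm(X(:, i) - X(:, j))^2;   % squared distances
     end
 end
 H = eye(n) - ones(n)/n;
 K = -0.5 * H * D * H;                      % kernel matrix
 [V, L] = eig((K + K')/2);                  % symmetrise for numerical safety
 [vals, order] = sort(diag(L), 'descend');
 Y = diag(sqrt(vals(1:p))) * V(:, order(1:p))';   % p-by-n embedding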

Relation between MDS and PCA

It has been shown that the goal of MDS is to obtain $Y$ by minimizing $\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}}$
K is positive semi-definite so it can be written as $K = X^TX$. Thus

$\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}} =\text{min}_Y(\sum_{i=1}^n\sum_{j=1}^n(x_i^Tx_j-y_i^Ty_j)^2)$

and

$\text{min}_Y(\sum_{i=1}^n\sum_{j=1}^n(x_i^Tx_j-y_i^Ty_j)^2)=\text{min}_Y(\text{Tr}((X^TX-Y^TY)^2))$

From SVD we may make the decompositions $\, X^TX=V\Lambda V^T$ and $Y^TY=Q\hat{\Lambda}Q^T$

$\text{min}_Y(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_Y(\text{Tr}((V\Lambda V^T-Q\hat{\Lambda}Q^T)^2))=\text{min}_Y(\text{Tr}((\Lambda-V^TQ\hat{\Lambda}Q^TV)^2))$

Let $G=Q^TV$

$\text{min}_{G,\hat{\Lambda}}(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_{G,\hat{\Lambda}}(\text{Tr}((\Lambda-G^T\hat{\Lambda}G)^2))=\text{min}_{G,\hat{\Lambda}}(\text{Tr}(\Lambda^2+G^T\hat{\Lambda}GG^T\hat{\Lambda}G-2\Lambda G^T\hat{\Lambda}G))$

$=\text{min}_{G,\hat{\Lambda}}(\text{Tr}(\Lambda^2+G^T\hat{\Lambda}^2G-2\Lambda G^T\hat{\Lambda}G))$
It can be shown that for fixed $\hat{\Lambda}$ that the minimum over $G$ occurs at $G=I$, which implies that

$\text{min}_{G,\hat{\Lambda}}(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_{\hat{\Lambda}}(\text{Tr}(\Lambda^2-2\Lambda\hat{\Lambda}+\hat{\Lambda}^2))=\text{min}_{\hat{\Lambda}}(\text{Tr}((\Lambda-\hat{\Lambda})^2))$

Since $\, \Lambda$ and $\hat{\Lambda}$ are diagonal matrices containing the eigenvalues of $X^TX,\text{ and } Y^TY$ respectively, and all said eigenvalues are positive, then the min is clearly obtained when the $p$ nonzero diagonal elements in $Y^TY$ are the $p$ largest eigenvalues in $X^TX$

Therefore in Euclidean space, the solution of MDS is the same as that of PCA.

Approximation to MDS Algorithm <ref> http://www.cmap.polytechnique.fr/~peyre/cours/x2005signal/dimreduc_landmarks.pdf </ref>

The problem with the MDS algorithm is that the matrices $D^X$ and $K$ are not sparse. It is therefore expensive to compute their eigen-decompositions when the number of data points is very large. To reduce the computational work required, we can use Landmark Multidimensional Scaling (LMDS). LMDS is an algorithm designed to run classical MDS on a chosen subset of the data, the landmark points, and then locate the remaining data points within this space given knowledge of their distances to the landmark points. The following paper has details on FastMap, MetricMap, and Landmark MDS and unifies the mathematical foundation of the three presented MDS algorithms.

ISOMAP and LLE (Lecture 5: Sept. 29, 2014)

ISOMAP

Isomap is a nonlinear dimensionality reduction method. In Multidimensional scaling we seek to find a low dimensional embedding $\,Y$ for some high dimensional data $\,X$, minimizing $\,\|D^X - D^Y\|^2$ where $\, D^Y$ is a matrix of euclidean distances of points on $\,Y$ (same with $\,D^X$). In Isomap we assume the high dimensional data in $X$ lie on a low dimensional manifold, and we'd like to replace the euclidean distances in $\, D^X$ with the distance on this manifold. We don't have information about this manifold, so we approximate it by forming the k-nearest-neighbour graph between points in the dataset. The distance on the manifold can then be approximated by the shortest path between points. This is called the geodesic distance, which is a generalization of the notion of straight line to curved spaces and is denoted by $\, D^G$. This is useful because it allows for nonlinear patterns to be accounted for in the lower dimensional space. From this point, the rest of the algorithm is the same as MDS. The approach of Isomap is capable of discovering nonlinear degrees of freedom that underlie the complex nature of the data. This algorithm efficiently computes a globally optimal solution which is guaranteed to converge asymptotically to the true underlying structure. (Further Reading: A Global Geometric Framework for Nonlinear Dimensionality Reduction, by Joshua B. Tenenbaum, Vin de Silva, John C. Langford)

The Isomap Algorithm can thus be described through the following steps:

1. Construct the Neighbourhood Graph

Determine which points are neighbours on manifold M based on distances $d_{i,j}^{(X)}$ between pairs of points i and j in the input space X. To do this, we can use one of the two methods below:

a) K-Nearest Neighbours: For each point i, select the k points with the minimum distance to point i as neighbours.
b) ${\epsilon}$-Radius Neighbourhood: For each point i, the set of neighbours is given by $\, \{ j : d_{i,j}^{ (X) } < \epsilon \}$

The neighbourhood relations are represented as a weighted graph G over the data points, with edges of weight $d_{i,j}^{(X)}$ between the neighbouring points.

2. Compute shortest paths

Estimate geodesic distances $d_{i,j}^{(M)}$ between all pairs of points on the manifold M by computing their shortest path distance $d_{i,j}^{(G)}$ in graph G.

3. Construct low dimensional embeddings (classical MDS)

Apply classical MDS to the matrix of graph distances $D_{G} = \{ d_{i,j}^{(G)} \}$, constructing an embedding of the data in a d-dimensional Euclidean space Y that best represents the manifolds estimated intrinsic geometry.

It is worth noting that since $D^{G}$ is not a Euclidean distance matrix, the matrix $K = -\frac{1}{2}HD^{(G)}H$ is not guaranteed to be positive semi-definite. This is handled in the implementation [5] by keeping only the positive part of the real eigenvalues, which can be viewed as projecting the matrix onto the cone of positive semi-definite matrices.

It is important to note that k determines "locality". Isomap captures local structure and "unfolds" the manifold.

The swiss roll manifold is a commonly discussed example of this. Note that the Euclidean distance between points A and B drastically differs from the Geodesic distance.
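A rough MATLAB sketch of the three Isomap steps on a toy curve follows; the data, k and p are illustrative assumptions, shortest paths are computed with a simple Floyd-Warshall loop, and the k-nearest-neighbour graph is assumed to be connected.

 % Isomap sketch: k-NN graph, graph shortest paths, then classical MDS
 % on the squared geodesic (graph) distances.
 t = linspace(0, 3, 40);
 X = [cos(t); sin(t); t];                   % toy data on a 1-D curve in R^3
 k = 5;  p = 2;  n = size(X, 2);
 E = zeros(n);                              % pairwise Euclidean distances
 for i = 1:n
     for j = 1:n
         E(i, j) = norm(X(:, i) - X(:, j));
     end
 end
 DG = inf(n);                               % k-NN graph distances
 for i = 1:n
     [~, idx] = sort(E(i, :));
     nb = idx(2:k+1);                       % k nearest neighbours of point i
     DG(i, nb) = E(i, nb);
     DG(nb, i) = E(nb, i);
     DG(i, i) = 0;
 end
 for m = 1:n                                % Floyd-Warshall shortest paths
     DG = min(DG, repmat(DG(:, m), 1, n) + repmat(DG(m, :), n, 1));
 end
 H = eye(n) - ones(n)/n;
 K = -0.5 * H * (DG.^2) * H;                % classical MDS on squared graph distances
 [V, L] = eig((K + K')/2);
 [vals, order] = sort(real(diag(L)), 'descend');
 Y = diag(sqrt(max(vals(1:p), 0))) * V(:, order(1:p))';   % p-by-n embedding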

ISOMAP Results

A Global Geometric Framework for Nonlinear Dimensionality Reduction is the paper that proposed ISOMAP; the figures below show some of its results. The first image shows the different orientations in the face images dataset: the x axis captures the horizontal orientation, the y axis captures the vertical orientation, and the bar captures the lighting direction. The second image shows handwritten digit 2's and the ISOMAP results on this dataset.

Locally Linear Embedding (LLE)

Locally Linear Embedding (LLE) is another technique for nonlinear dimensionality reduction that tries to find a mapping from the high dimensional space to a low dimensional space such that local distances between points are preserved. LLE has some advantages over Isomap, including faster optimization and better results on many problems. LLE reconstructs each point based on its k nearest neighbours.

E.g. $x_i \approx w_1x_{1}+w_2x_{2}+...+w_kx_{k} , \sum_i w_i = 1$

The method minimizes the reconstruction error $\mathbf{\|x_i-(w_1x_{i1}+w_2x_{i2}+...+w_kx_{ik})\|^2}$ for each $x_i \in X$ with corresponding k-nearest-neighbour set $\mathbf{\, \{x_{i1},x_{i2},...,x_{ik}\}}$. LLE is therefore an optimization problem for:

$\mathbf{ \text{min}_W \sum_{i=1}^n{(x_i-\sum_{j=1}^n{w_{ij}x_j)^2}}}$ such that $\mathbf{\sum_{j=1}^n w_{ij} = 1}$ and also $\mathbf{w_{ij} = 0}$ if $\mathbf{x_i}$ and $\mathbf{x_j}$ are not neighbors

Algorithm

• Find the k-nearest neighbour for each data point
• Compute $\mathbf{W_{n \times n}}$ through $\mathbf{\text{min}_W \sum_{i=1}^n{(x_i-\sum_{j=1}^n{W_{ij}x_j)^2}}}$ such that $\mathbf{\sum_j w_{ij} = 1}$ (so each point is reconstructed by an affine combination, with weights summing to one, of its neighbours), where the $\mathbf{x_j}$ are the k nearest neighbours of $\mathbf{x_i}$
• Let $\mathbf{V_i = [x_j \,|\, x_j}$ is a neighbour of $\mathbf{ x_i ]_{d \times k}}$. Let $\mathbf{w(i)}$ be the vector of all k weights of point $\mathbf{x_i}$ (i.e. row i of W with all zeroes removed). Then:
$\mathbf{V_i*w(i) = \sum_{j=1}^n{w_{ij}x_j}}$
and hence we need to optimize
$\mathbf{\, \text{min}_{w(i)} |x_i-V_i*w(i)|^2}$ under the constraint $\mathbf{\sum_{j=1}^n{w_{ij}}=1}$
This is a separable optimization problem which can be solved for each point independently, which can be simplified to
$\mathbf{min|x_ie_k^Tw(i)-V_iw(i)|^2 }$, where $e=\begin{bmatrix} 1 \ 1 \ ... \ 1 \end{bmatrix}^T$.
Now we have
$\mathbf{\, |x_ie_k^Tw(i)-V_iw(i)|^2 =(x_ie^Tw(i)-V_iw(i))^T(x_ie^Tw(i)-V_iw(i))= w(i)^T(x_ie^T-V_i)^T(x_ie^T-V_i)w(i) }$

Let $\mathbf{\, G =(x_ie^T-V_i)^T(x_ie^T-V_i) }$. Therefore we can instead simplify
$\mathbf{ min \;w(i)^TGw(i)}$ under the constraint $\mathbf{\, w(i)^Te=1}$

• Use Lagrange Multipliers to optimize the above equation:
$\mathbf{\, L(w(i),\lambda)=w(i)^TGw(i)-\lambda(w(i)^Te-1) }$

$\mathbf{\frac{\partial L}{\partial {w(i)}}=2Gw(i)-\lambda e=0 }$ which implies $\mathbf{ Gw(i)=\frac{\lambda}{2}e}$ and hence $\mathbf{ w(i)=\frac{\lambda}{2}G^{-1}e }$
Note that we do not need to compute $\mathbf{\lambda}$ explicitly; we can just compute $\mathbf{\, G^{-1}e}$ and then re-scale so that $\mathbf{\, w(i)^Te=1}$ (a small sketch of this computation is given after this list).
• Finally, solve $\mathbf{\text{min}_Y \sum_{i=1}^n{(y_i-\sum_{j=1}^n{w_{ij}y_j)^2}}}$ where w is known and y is unknown
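A minimal MATLAB sketch of the weight computation for a single point follows; the point x_i, its neighbour matrix Vi, and k are toy/illustrative values, and a small regularisation is added in case G is singular (e.g. when k > d).

 % LLE weights for one point x_i with neighbours in the columns of Vi.
 d = 3;  k = 2;
 x_i = randn(d, 1);  Vi = randn(d, k);      % toy point and its k neighbours
 e  = ones(k, 1);
 G  = (x_i*e' - Vi)'*(x_i*e' - Vi);         % k-by-k Gram matrix
 G  = G + 1e-9*trace(G)*eye(k);             % small regularisation (optional)
 wi = G\e;                                  % proportional to G^{-1} e
 wi = wi/(e'*wi);                           % rescale so that w(i)' e = 1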

Notes:

$\, G$ is a $\, k \times k$ matrix that is invertible only under some conditions depending on $\, k$: its rank is at most $\, \min(d, k)$, so it can be full rank only when $\, k \leq d$; otherwise a small regularization term is usually added before inverting.

LLE doesn't handle the out-of-sample case directly; there are extensions that handle this, as in [6]

If we want to reduce the dimensionality to p, we choose the eigenvectors corresponding to the p+1 smallest eigenvalues and discard the bottom (constant) one, as explained in the next lecture.

Pros of Isomap:

1. Calculation is based on eigenvalue and eigenvectors, thus overall stability and optimality is maintained.
2. Only need one pre-determined parameter: k(nearest neighbours) or $\, {\epsilon}$ (neighbourhood radius)
3. Able to determine intrinsic dimension of signal from residual.

Cons of Isomap

1. It is slow - it requires comparing all possible paths between each N(N-1)/2 pairs of points and determining which is shortest. (But this can be fixed via the Nystrom approximation which we will see later.)
2. It is not obvious how to handle out of sample data

Pros of LLE:

1. LLE is capable of retaining the structure and characteristics of the original data.
2. LLE requires few parameters.
3. Combining 1 & 2, LLE is good for troubleshooting and error diagnostics, since it retains most of the structure of the original data and needs only a minimal number of parameters.
4. Much faster than ISOMAP, often with comparable or better performance

Cons of LLE

1. It is not obvious how to handle out of sample data
2. There is no way of knowing if reducing the data to p dimensions is the optimal reduction.

LLE continued, introduction to maximum variance unfolding (MVU) (Lecture 6: Oct. 1, 2014)

LLE

Recall the steps of LLE can be summarized as follows:

1. For each data point, find its $k$ nearest neighbours for some fixed $k$.
2. Find reconstruction weights $W$, minimizing
$\sum_{i=1}^n ||x_i - \sum_{j=1}^n w_{ij}x_j||^2$, where $\sum_j w_{ij} = 1$
3. Reconstruct the low dimensional embedding with respect to these weights.
That is, find low dimensional $Y$ minimizing
$\sum_{i=1}^n||y_i - \sum_{j=1}^nw_{ij}y_j||^2$.

The previous lecture covered how to find the weights $W$. Here we will discuss how to find the low-dimensional embedding $Y$ using these weights.

First, we want to write the previous equation in a matrix form.
Consider that $Y$ is a $p \times n$ matrix and $I_{:i}$ is the $i^{th}$ column ($n \times 1$) of the identity matrix (using MATLAB-like notation).

Then the sample, $i$, can be represented as follows: $y_i = Y * I_{:i} = \begin{bmatrix} y_1, y_2 ,... y_i,..., y_n \end{bmatrix} \begin{bmatrix} 0 \\0 \\... \\ 1 \\ ... \\0 \end{bmatrix}$

Note that: $I_{i:} Y^T = y_i^T$ where $I_{i:}$ is the $i^{th}$ row from the identity matrix

Similarly, $\sum_j w_{ij} y_j^T = W_{i:} Y^T$

So the objective is equivalent to minimizing $\sum_{i=1}^n||I_{i:} Y^T - W_{i:}Y^T||^2 = ||IY^T - WY^T||^2$

But first, to prevent the trivial solution of the 0-matrix, we add the constraint $\frac{1}{n}YY^T = I$. Now, the objective can be rewritten as minimizing:

$||Y^T - WY^T||^2_F$

where the above norm is the Frobenius norm, which takes the square root of the sum of squares of all the entries in the matrix. Recall this norm can be written in terms of the trace operator:

$||Y^T - WY^T||^2_F = ||(I - W) Y^T||^2_F = Tr(((I - W) Y^T)^T ((I - W) Y^T))= Tr(Y(I-W)^T(I-W)Y^T)$

By letting $\, M=(I-W)^T(I-W)$, the objective function becomes

$\, \min_Y Tr(Y M Y^T)$

This minimization has a well-known solution: we can set the rows of $Y$ to be the $p$ eigenvectors of $M$ with the smallest eigenvalues, where $p$ is the dimension of the reduced space (see the note below on discarding the very smallest one).

To solve this minimization using the Lagrange multiplier, the Lagrangian is:

$L(Y, \Lambda) = Tr(YMY^T) - Tr(\Lambda (\frac{1}{n} Y Y^T - I))$

$\frac{\partial}{\partial Y} L(Y, \Lambda) = 2YM - \frac{2}{n} \Lambda Y$

Setting this equal to zero and solving gives

$YM = \frac{1}{n} \Lambda Y$
$MY^T = Y^T \Lambda^*, \text{ where } \Lambda^* = \frac{1}{n}\Lambda$

so the rows of $Y$ (the columns of $Y^T$) are eigenvectors of $M$,

where $\Lambda = \begin{bmatrix} \lambda_1 & \ldots & 0 \\ 0 & \lambda_2 & \ldots\\ \vdots & \ddots & \vdots\\ 0 &\ldots & \lambda_p \end{bmatrix}$

If we want to reduce the dimensions to p, we need to choose the eigenvectors corresponding to the p + 1 smallest eigenvalues. This is because the smallest eigenvalue of the matrix is equal to 0, since M has the structure of a graph Laplacian (the number of $0$ eigenvalues of a Laplacian is the number of connected components), and the corresponding constant eigenvector is discarded.

Note that for LLE, the bottom eigenvector of the matrix M (the one with eigenvalue 0) represents a free translation mode. By discarding this eigenvector, we enforce the constraint $\sum_{i=1}^n y_i=0$, since orthogonality to the constant eigenvector means that the components of each of the remaining eigenvectors must sum to 0. The remaining p eigenvectors form the p embedding coordinates found by LLE.
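A minimal MATLAB sketch of this embedding step (assuming the n-by-n weight matrix W has already been computed as described above; p = 2 is an illustrative choice):

 % LLE embedding: rows of Y are the eigenvectors of M with the smallest
 % non-zero eigenvalues (the bottom, constant eigenvector is discarded).
 n = size(W, 1);  p = 2;
 M = (eye(n) - W)'*(eye(n) - W);
 [V, D] = eig((M + M')/2);
 [~, order] = sort(diag(D), 'ascend');      % smallest eigenvalues first
 Y = V(:, order(2:p+1))';                   % p-by-n embedding (optionally scale by sqrt(n))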

Comparison between Dimensionality Reduction Approaches (Unified Framework)

Approach: Eigenvectors of
PCA: $\boldsymbol{ X^T X }$ (giving the embedding $\boldsymbol{ Y = \Sigma V^T }$)
MDS: $\boldsymbol{ - \frac{1}{2} (I - \frac{1}{n} e e^T) D (I - \frac{1}{n} e e^T) }$, a.k.a. $\boldsymbol{-\frac{1}{2} H D H}$ where $\boldsymbol{H=(I - \frac{1}{n} e e^T)}$
ISOMAP: $\boldsymbol{ - \frac{1}{2} (I - \frac{1}{n} e e^T) D^G (I - \frac{1}{n} e e^T) }$
LLE: $\boldsymbol{ (I - W)^T (I - W) }$ (the matrix $\boldsymbol{M}$ defined above)