# stat841f14


## Principal components analysis (Lecture 1: Sept. 10, 2014)

### Introduction

Principal Component Analysis (PCA), first invented by Karl Pearson in 1901, is a statistical technique for data analysis. Its main purpose is to reduce the dimensionality of the data.

Suppose there is a set of data points in a d-dimensional space. The goal of PCA is to find a linear subspace of lower dimensionality p (p $\leq$ d) such that maximum variance is retained in this lower-dimensional space. The linear subspace can be specified by up to d orthogonal vectors $u_1 , u_2 , ... , u_d$, which form a new coordinate system; these vectors are called 'principal components'. In other words, PCA aims to reduce the dimensionality of the data while preserving its information (or minimizing the loss of information). Information comes from variation: the more variation captured in the lower dimension, the more information is preserved. For example, if all data points have the same value along one dimension, then that dimension carries no information. So, to preserve information, the subspace needs to contain the components (or dimensions) along which the data has most of its variability. In practical problems, however, a lower-dimensional linear subspace that contains all data points does not exist, and loss of information is inevitable. Through PCA, we try to reduce this loss and capture most of the features of the data.

The figure below demonstrates an example of PCA. The data is transformed from the original 3D space to a 2D coordinate system where each coordinate is a principal component.

Now, consider the ability of two of the above examples' components (PC1 and PC2) to retain information from the original data. The data in the original space is projected onto each of these two components separately in the figure below. Notice that PC1 captures the variation in the original data better than PC2 alone. If reducing the original d = 3 dimensional data to p = 1 dimension, PC1 is therefore preferable to PC2.

Comparison of two different components used for dimensionality reduction

### PCA Plot example

For example<ref> https://onlinecourses.science.psu.edu/stat857/sites/onlinecourses.science.psu.edu.stat857/files/lesson05/PCA_plot.gif </ref>, in the top left corner of the image below, the point $x_i$ is shown in two dimensions; its coordinates are $x_{i,1}$ and $x_{i,2}$.

All the red points in the plot are represented by their projected values on the two original coordinates (Feature 1, Feature 2).

The PCA technique uses a new set of coordinates (shown in green). As the coordinate system changes, each point is represented by a new pair of coordinate values, for example $z_{i,1}$ and $z_{i,2}$. These values are determined by the original vector and by the relation between the rotated coordinates and the original ones.

In the original coordinates the two features $x_1$ and $x_2$ are linearly correlated. After applying PCA, if we take the two principal components as the new coordinates, it is guaranteed that the data along these two coordinates has correlation equal to 0.

By rotating the coordinates we performed an orthonormal transform on the data. After the transform, the correlation is removed - which is the whole point of PCA.

As an extreme case, consider points that all lie on a straight line; in the original coordinate system, two coordinates are needed to describe them.

In short, after rotating the coordinates, we need only one coordinate, and the value on the other is always zero. This shows PCA's dimension reduction in an extreme case; real-world points may not fit exactly on a line, but only approximately.
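This extreme case is easy to check numerically. Below is a small illustrative sketch in Python/NumPy (not part of the lecture code): for points lying exactly on a line, the sample covariance matrix has only one nonzero eigenvalue, so a single coordinate suffices after rotation.

```python
import numpy as np

# Points lying exactly on the line y = 2x in 2D
t = np.linspace(-1.0, 1.0, 50)
X = np.vstack([t, 2.0 * t])                 # shape (2, 50): columns are points

Xc = X - X.mean(axis=1, keepdims=True)      # center the data
S = Xc @ Xc.T / Xc.shape[1]                 # sample covariance matrix (2 x 2)

eigvals, eigvecs = np.linalg.eigh(S)        # eigenvalues in ascending order
# The smaller eigenvalue is numerically zero: after rotating to the
# eigenvector basis, one coordinate carries all of the variation.
assert abs(eigvals[0]) < 1e-10
assert eigvals[1] > 0
```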

### PCA applications

As mentioned, PCA is a method to reduce the dimension of the data, where possible, to principal components that cover as much of the data's variation as possible. This technique is useful in applications involving high-dimensional data, such as data pre-processing, neuroscience, computer graphics, meteorology, oceanography, gene expression analysis, economics, and finance, among others.

Data is usually represented by many variables. In data preprocessing, PCA is a technique used to select a subset of variables in order to figure out the best model for the data. In neuroscience, PCA is used to identify the specific properties of a stimulus that increase a neuron’s probability of generating an action potential.

Figures below show some examples of PCA in real-world applications.

### Mathematical details

PCA is a transformation from the original space to a linear subspace with a new coordinate system. Each coordinate of this subspace is called a Principal Component. The first principal component is the coordinate of this system along which the data points have the maximum variation. That is, projecting the data points along this coordinate yields the maximum variance of the data (compared to any other vector in the original space). The second principal component is the coordinate in the direction of the second greatest variance of the data, and so on.

Let's denote the basis of the original space by $\mathbf{v_1}$, $\mathbf{v_2}$, ... , $\mathbf{v_d}$. Our goal is to find the principal components (coordinates of the linear subspace), denoted by $\mathbf{u_1}$, $\mathbf{u_2}$, ... , $\mathbf{u_p}$, in the hope that $p \leq d$. First, we would like to obtain the first principal component $\mathbf{u_1}$, the component in the direction of maximum variance. This component can be treated as a vector in the original space and so is written as a linear combination of the basis in the original space.

$\mathbf{u_1}=w_1\mathbf{v_1}+w_2\mathbf{v_2}+...+w_d\mathbf{v_d}$

Vector $\mathbf{w}$ contains the weight of each basis in this combination.

$\mathbf{w}=\begin{bmatrix} w_1\\w_2\\\vdots\\w_d \end{bmatrix}$

Suppose we have n data points in the original space, represented by $\mathbf{x_1}$, $\mathbf{x_2}$, ..., $\mathbf{x_n}$. The projection of each point $\mathbf{x_i}$ on $\mathbf{u_1}$ is $\mathbf{w}^T\mathbf{x_i}$.

Let $\mathbf{S}$ be the sample covariance matrix of data points in original space. The variance of the projected data points, denoted by $\Phi$ is

$\Phi = Var(\mathbf{w}^T \mathbf{x_i}) = \mathbf{w}^T \mathbf{S} \mathbf{w}$

We would like to maximize $\Phi$ over the set of all vectors $\mathbf{w}$ in the original space. But this problem is not yet well-defined, because for any choice of $\mathbf{w}$, we can increase $\Phi$ simply by multiplying $\mathbf{w}$ by a scalar greater than one. So we add the following constraint to the problem to bound the length of the vector $\mathbf{w}$:

$\max_{\mathbf{w}} \Phi = \mathbf{w}^T \mathbf{S} \mathbf{w}$

subject to : $\mathbf{w}^T \mathbf{w} =1$

Using Lagrange Multiplier technique we have:

$L(\mathbf{w} , \lambda ) = \mathbf{w}^T \mathbf{S} \mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1 )$

By taking the derivative of $L$ w.r.t. the primal variable $\mathbf{w}$ we have:

$\frac{\partial L}{\partial \mathbf{w}}=( \mathbf{S}^T + \mathbf{S})\mathbf{w} -2\lambda\mathbf{w}= 2\mathbf{S}\mathbf{w} -2\lambda\mathbf{w}= 0$

Note that $\mathbf{S}$ is symmetric so $\mathbf{S}^T = \mathbf{S}$.

From the above equation we have:

$\mathbf{S} \mathbf{w} = \lambda\mathbf{w}$.

So $\mathbf{w}$ is an eigenvector of the matrix $\mathbf{S}$ and $\lambda$ is the corresponding eigenvalue. (Taking the derivative of $L$ w.r.t. $\lambda$ just regenerates the constraint of the optimization problem.)

By multiplying both sides of the above equation by $\mathbf{w}^T$ and using the constraint, we obtain:

$\Phi = \mathbf{w}^T \mathbf{S} \mathbf{w} = \mathbf{w}^T \lambda \mathbf{w} = \lambda \mathbf{w}^T \mathbf{w} = \lambda$

The interesting result is that the objective function is equal to an eigenvalue of the covariance matrix. So, to obtain the first principal component, which maximizes the objective function, we just need the eigenvector corresponding to the largest eigenvalue of $\mathbf{S}$. Subsequently, the second principal component is the eigenvector corresponding to the second largest eigenvalue, and so on.
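This result can be verified numerically. The following is an illustrative sketch in Python/NumPy (not from the lecture): we compute the sample covariance matrix of some data, take the eigenvector of its largest eigenvalue, and confirm that the variance of the projected data equals that eigenvalue, i.e. $\Phi = \lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points in 3D with unequal variances along the axes
X = rng.normal(size=(3, 200)) * np.array([[3.0], [1.0], [0.3]])

Xc = X - X.mean(axis=1, keepdims=True)
S = Xc @ Xc.T / Xc.shape[1]              # sample covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
w = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue

# Variance of the projection w^T x equals the top eigenvalue (Phi = lambda)
proj = w @ Xc
phi = proj.var()
assert np.isclose(phi, eigvals[-1])
```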

### Principal component extraction using singular value decomposition

Singular Value Decomposition (SVD) is a well-known way to decompose any m by n matrix $\mathbf{A}$ into three useful matrices:

$\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$.

where $\mathbf{U}$ is an m by m unitary matrix, $\mathbf{UU^T} = I_m$; each column of $\mathbf{U}$ is an eigenvector of $\mathbf{AA^T}$,

$\mathbf{\Sigma}$ is an m by n diagonal matrix whose non-zero elements are the square roots of the eigenvalues of $\mathbf{AA^T}$, and

$\mathbf{V}$ is an n by n unitary matrix, $\mathbf{VV^T} = I_n$; each column of $\mathbf{V}$ is an eigenvector of $\mathbf{A^TA}$.

Now, comparing the concepts of PCA and SVD, one may find out that we can perform PCA using SVD.

Let's construct a d by n matrix with our n data points, such that each column of this matrix represents one data point in d-dimensional space:

$\mathbf{X} = [\mathbf{x_1} \mathbf{x_2} .... \mathbf{x_n} ]$

and make another matrix $\mathbf{X}^*$ simply by subtracting the mean of the data points from each column of $\mathbf{X}$.

$\mathbf{X}^* = \mathbf{X} - \mu_X$

Then $\mathbf{X}^*$ is a zero-mean version of our data points, for which $\mathbf{X^*} \mathbf{X^{*^T}}$ is (up to a factor of $\frac{1}{n}$) the covariance matrix.

So, applying SVD to $\mathbf{X}^*$, the squares of the singular values in $\mathbf{\Sigma}$ give the eigenvalues of the covariance matrix, and the columns of $\mathbf{U}$ give the corresponding eigenvectors. We can then easily extract the desired principal components.
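As an illustrative check (Python/NumPy, not from the lecture), the left singular vectors of the zero-mean data matrix are the eigenvectors of $\mathbf{X^*} \mathbf{X^{*^T}}$, and the squared singular values are its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 100))                     # 100 points in 5D
Xc = X - X.mean(axis=1, keepdims=True)            # X* : zero-mean data

# Route 1: eigendecomposition of the (unnormalized) covariance X* X*^T
eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T)

# Route 2: SVD of the centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Squared singular values = eigenvalues of X* X*^T (both sorted descending)
assert np.allclose(np.sort(s**2)[::-1], np.sort(eigvals)[::-1])

# The first principal component agrees up to sign
pc_eig = eigvecs[:, -1]
pc_svd = U[:, 0]
assert np.isclose(abs(pc_eig @ pc_svd), 1.0)
```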

### MATLAB example

In this example we use different pictures of a man's face with different facial expressions, but the pictures are noisy, so the face is not easily recognizable. The data set consists of 1965 pictures, each 20 by 28 pixels, so the dimensionality is 20*28 = 560.

Our goal is to remove the noise using PCA, by means of the SVD function in MATLAB. We know that noise is not the main feature that makes these pictures look different, so it lies among the least-variance components. We first extract the principal components and then remove those corresponding to the smallest eigenvalues of the covariance matrix. Here, the singular values along the diagonal of matrix S are sorted in descending order. For example, the first column of matrix U, which corresponds to the first diagonal element of S, is the first principal component.

We then reconstruct the picture using the first d principal components. If d is too large, we cannot completely remove the noise; if it is too small, we lose information from the original data, for example the facial expression (smile, sadness, etc.). We can vary d until we achieve a reasonable balance between noise reduction and information loss.

File:noisy-face.jpg
Noise reduced version of a picture in MATLAB example
 >> % loading the noisy data, file "noisy" stores our variable X which contains the pictures
>> load noisy;
>>
>>
>> % show a sample image in column 1 of matrix X
>> imagesc(reshape(X(:,1),20,28)')
>>
>> % set the color of image to grayscale
>> colormap gray
>>
>> % perform SVD; if matrix X is full rank, we will obtain 560 PCs
>> [U S V] = svd(X);
>>
>> d= 10;
>>
>> % reconstruct X using only the first d principal components and removing noise
>> XX = U(:, 1:d)* S(1:d,1:d)*V(:,1:d)' ;
>>
>> % show image in column 1 of XX which is a noise reduced version
>> imagesc(reshape(XX(:,1),20,28)')


## PCA continued, Lagrange multipliers, singular value decomposition (SVD) (Lecture 2: Sept. 15, 2014)

### Principal component analysis (continued)

PCA is a method to reduce dimensions in data or to extract features from data.

Given a data point in vector form x, the goal of PCA is to map x to y, where $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^p$, such that p is much less than d. For example, we could map a two-dimensional data point onto any one-dimensional vector in the original 2D space, or map a three-dimensional data point onto any 2D plane in the original 3D space.

The transformation from $d$-dimensional space to $p$ dimensional space is chosen in such a way that maximum variance is retained in the lower dimensional subspace. This is useful because variance represents the main differences between the data points, which is exactly what we are trying to capture when we perform data visualization. For example, when taking 64-dimensional images of hand-written digits, projecting them onto 2-dimensional space using PCA captures a significant amount of the structure we care about.

In terms of data visualization, it is fairly clear why PCA is important. High dimensional data is sometimes impossible to visualize. Even a 3D space can be fairly difficult to visualize, while a 2D space, which can be printed on a piece of paper, is much easier to visualize. If a higher dimensional dataset can be reduced to only 2 dimensions, it can be easily represented by plots and graphs.

In the case of many data points $x_i$ in d-dimensional space:

$X_{d \times n} = [x_1\; x_2\; ...\; x_n]$

There are infinitely many vectors in $\mathbb{R}^p$ onto which the points in X can be mapped. When mapping these data points to lower dimensions, information will be lost. To preserve as much information as possible, the points are mapped to a vector in $\mathbb{R}^p$ that preserves as much variation in the data as possible; in other words, the data is mapped to the vector in $\mathbb{R}^p$ with maximum variance. In the images below, the data points mapped onto u in Mapping 1 have greater variance than in Mapping 2, and thus preserve more information in the data.

The projection of a single data point x on u is $u^Tx$ (a scalar).

To project n data points onto u, compute $Y = u^TX$.

The variance of $Y = u^TX$ can be calculated easily, as Y is a 1 by n vector.

In a higher dimensional space $\mathbb{R}^q$ with q > 1, the corresponding concept is the covariance matrix.

The variance of the projected points in the direction of vector u is given by:

$Var(u^TX) = u^TSu$, where S is the sample covariance matrix of X.

In finding the first principal component the objective is to find the direction u which will have the greatest variance of the points projected onto it. This can be expressed as the following optimization problem:

$\max_u\ u^TSu$
$s.t.\ u^Tu = 1$

The restriction $u^Tu = 1$ bounds the problem. If the length of u were not restricted, $u^TSu$ would go to infinity as the magnitude of u goes to infinity. The selection of 1 as the bound is arbitrary.

### Lagrange multipliers

The Lagrange multiplier is a method for solving optimization problems. After constraining the principal component analysis output vector to an arbitrary size the task of maximizing variance in lower dimensional spaces becomes an optimization problem.

#### Lagrange multiplier theorem

The method of Lagrange multipliers states that given the optimization problem:

   $max\ f(x,y)$
$s.t.\ g(x,y) = c$


The Lagrangian is defined by:

$\ L(x,y,\lambda)= f(x,y) - \lambda(g(x,y)-c)$

Which implies, where $\bigtriangledown$ is the gradient, that:

$\bigtriangledown\ f(x,y) = \lambda \bigtriangledown g(x,y)$

$\bigtriangledown\ f(x,y) - \lambda \bigtriangledown g(x,y) = 0$

In practice,

$\bigtriangledown\ L$ is calculated by computing the three partial derivatives of $\ L$ with respect to $\ x, y, \lambda$, setting them to 0, and solving the resulting system of equations.

That is to say, the solution is given by taking the solution of:

$\ \bigtriangledown L = \begin{bmatrix} \frac{\partial L}{\partial x}\\\frac{\partial L}{\partial y}\\\frac{\partial L}{\partial \lambda}\end{bmatrix}= 0$

which maximizes the objective function $\ f(x,y)$.

#### Intuitive explanation

The Lagrange multiplier method utilizes gradients to find maximum values subject to constraints. A gradient $\bigtriangledown$ is a vector which collects all of the partial derivatives of a function. Intuitively, it can be thought of as the direction of steepest ascent at each point of the function's graph.

Finding a point where the gradients of f(x,y) and g(x,y) differ only by a factor lambda, $\bigtriangledown f(x,y) = \lambda \bigtriangledown g(x,y)$, identifies a point where the level curves of the objective and constraint functions are tangent, i.e. their gradients point along the same line. This is important because if the two gradients were not parallel, we could move along the constraint curve in a direction that increases the objective, meaning we would not be at a maximum. The factor lambda is adjusted to ensure that the constraint is satisfied at the candidate point.

#### Lagrange multiplier example

Consider the objective function $f(x,y) = x - y$ subject to the constraint $g(x,y) = x^2 + y^2 = 1$. We would like to maximize $f$.

Calculating the Lagrangian gives

$\text{L}(x,y,\lambda) = x - y - \lambda g(x,y) = x - y - \lambda (x^2 + y^2 - 1)$

This gives three partial derivatives:

$\frac{\partial L}{\partial x} = 1 - 2\lambda x = 0, \quad \frac{\partial L}{\partial y} = -1 - 2\lambda y = 0, \quad \frac{\partial L}{\partial \lambda} = -(x^2 + y^2 - 1) = 0$

This gives two solutions:

1. $x = \frac{\sqrt{2}}{2}, y = \frac{-\sqrt{2}}{2}, \lambda = \frac{1}{\sqrt{2}}$
2. $x = \frac{-\sqrt{2}}{2}, y = \frac{\sqrt{2}}{2}, \lambda = -\frac{1}{\sqrt{2}}$

Both solutions give extreme values of our objective function. Because this is an optimization problem we are interested in the solution which gives the maximum output, which is solution 1.
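The two stationary points can be checked numerically; a quick illustrative verification in Python (not from the lecture):

```python
import numpy as np

def f(x, y):
    return x - y

def g(x, y):
    return x**2 + y**2

s = np.sqrt(2) / 2

# Both candidates satisfy the constraint x^2 + y^2 = 1
assert np.isclose(g(s, -s), 1.0) and np.isclose(g(-s, s), 1.0)

# Solution 1 is the maximum of f on the circle, solution 2 the minimum
theta = np.linspace(0, 2 * np.pi, 100000)
vals = f(np.cos(theta), np.sin(theta))
assert np.isclose(vals.max(), f(s, -s), atol=1e-6)
assert np.isclose(vals.min(), f(-s, s), atol=1e-6)
```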

### Singular value decomposition revisited

Any matrix X can be decomposed into three matrices: $\ X = U \Sigma V^T$ where:

$\ U$ is an m x m matrix whose columns are the eigenvectors of $XX^T$

$\ \Sigma$ is an m x n matrix containing the singular values along the diagonal and zeroes elsewhere

$\ V^T$ is the transpose of an n x n matrix whose columns are the eigenvectors of $X^TX$

#### Connection of Singular Value Decomposition to PCA

If X is centered (has a mean of 0) then $XX^T$ is, up to a factor of $\frac{1}{n}$, the covariance matrix.

Using the SVD of X we get:

$\ XX^T = (U \Sigma V^T)(U \Sigma V^T)^T$

$\ XX^T = (U \Sigma V^T)(V \Sigma U^T)$

$\ XX^T = U \Sigma V^T V \Sigma U^T$

$\ XX^T = U \Sigma I \Sigma U^T$ (By the definition of V)

$\ XX^T = U \Sigma^2 U^T$

It is important to remember that U and V are unitary matrices, i.e. $U^TU=UU^T=I$ and $V^TV=VV^T=I$, with the identity matrix of the appropriate dimension in each equality. It is also interesting that the non-square matrix formed by taking a subset of the columns of $U$ or $V$ still satisfies $V^TV=I$ (the appropriate lower-dimensional identity matrix), although $VV^T$ is no longer an identity matrix.

### Matlab example of dimensionality reduction using PCA

The following example shows how principal component analysis can be used to distinguish between a handwritten 2 and a handwritten 3. In the example, the original images are each 64 pixels and thus 64-dimensional. By reducing the dimensionality to 2D, the 2's and 3's can be visualized on a 2D plot and one can estimate which digit a given point represents.

 >> % load file containing 200 handwritten 2's and 200 handwritten 3's
>> load 2_3;
>>
>> who
>> % displays that X is 64x400 matrix containing the handwritten digits
>>
>> imagesc(reshape(X(:,1),8,8)')
>> colormap gray
>> % displays first image, a handwritten 2
>>
>> M = mean(X,2)*ones(1,400);
>> Xb = X-M
>> % M is the matrix of the mean of column vectors repeated 400 times
>> % Xb is data centred about mean
>>
>> % Now use SVD to find eigenvectors, eigenvalues of Xb
>> [U S V] = svd(Xb);
>>
>> % project the points onto the first two principal components
>> Y = U(:,1:2)'*X;
>>
>> % 2D projections of 2's
>> plot(Y(1,1:200),Y(2,1:200),'.')
>>
>> % 2D projections of 3's
>> plot(Y(1,201:400),Y(2,201:400),'.')


## PCA Algorithm, Dual PCA, and Kernel PCA (Lecture 3: Sept. 17, 2014)

### PCA Algorithm (Algorithm 1)

#### Recover basis

To find the basis for PCA we first center the data by removing the mean of the sample, then calculate the covariance:

$XX^T = \sum_{i=1}^n x_i x_i^T$

Let $U$ be the eigenvectors of $XX^T$ corresponding to the top $\,d$ eigenvalues.

#### Encode training data

To encode our data using PCA we let

$\,Y = U^TX$

where $Y$ is an encoded matrix of the original data

#### Reconstruct training data

To project our data back to the higher dimension

$\hat{X} = UY = UU^T X$

#### Encode test example

$\,y = U^T x$

where $\,y$ is an encoding of $\,x$

#### Reconstruct test example

$\hat{x} = Uy = UU^T x$
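The steps of Algorithm 1 can be sketched compactly in Python/NumPy (illustrative; the function names are our own, and the data is assumed already centered):

```python
import numpy as np

def pca_fit(X, p):
    """Recover the basis: top-p eigenvectors of X X^T (X assumed centered)."""
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    return eigvecs[:, ::-1][:, :p]        # columns sorted by decreasing eigenvalue

def pca_encode(U, X):
    """Encode: Y = U^T X."""
    return U.T @ X

def pca_reconstruct(U, Y):
    """Reconstruct: X_hat = U Y = U U^T X."""
    return U @ Y

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 50))
X = X - X.mean(axis=1, keepdims=True)

U = pca_fit(X, p=4)                       # keep all components
X_hat = pca_reconstruct(U, pca_encode(U, X))
assert np.allclose(X_hat, X)              # reconstruction is lossless when p = d
```

With p strictly smaller than the original dimension, the same functions give a lossy reconstruction, as in the face-denoising example above.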

### Dual principal component analysis

#### Motivation

In some cases the dimension of our data can be much larger than the number of data points $( d \gg n )$. For example, suppose we are interested in weekly closing rates of the Dow Jones Industrial Average, S&P 500 Index, and the NASDAQ Composite Index over the past forty years. In this example we only have three data points $( n = 3 )$, but our dimension is over two thousand! $( d \approx 2080 )$ Consequently, the matrix $XX^T$ used to find the basis $U$ is $2080\times2080$, which is computationally expensive to calculate. Another example is studying gene data for patients, since the number of genes studied is likely much larger than the number of patients. When the data's dimensionality is high, a less compute-intensive way of recovering the basis is desirable.

Recall from Lecture 2 how we used singular value decomposition for PCA:

$\,X = U \Sigma V^T$

Multiply both sides by V to get:

$\,X V = U \Sigma V^T V$

Since $V^T V = I$ (the identity matrix) the SVD can be rearranged as

$\,X V = U \Sigma$

Then, multiply both sides on the right by $\Sigma^{-1}$ to get:

$\,U = X V \Sigma^{-1}$

And so $U$ can be calculated in terms of $V$ and $\Sigma$. $V$ contains the eigenvectors of $(X^T X)_{n \times n}$, so it is much easier to find than $U$ because $n \ll d$.

#### Algorithm

We can replace all instances of $\,U$ in Algorithm 1 with the dual form, $\,U = X V \Sigma^{-1}$, to obtain the algorithm for Dual PCA.

Recover Basis:
For the basis of Dual PCA, we calculate

$X^TX$

then let $V$ be the eigenvectors of $X^TX$ corresponding to the top $d$ eigenvalues. Finally we let $\Sigma$ be the diagonal matrix of the square roots of the top $d$ eigenvalues.

Encode Training Data:
Using Dual PCA to encode the data, let

$\,Y = U^TX = \Sigma V^T$

where $Y$ is an encoded matrix of the original data

Reconstruct Training Data:
Project the data back to the higher dimension by

$\hat{X} = UY = U \Sigma V^T = X V \Sigma^{-1} \Sigma V^T = X V V^T$

Encode Test Example:

$\,y = U^T x = \Sigma^{-1} V^T X^T x$

where $\,y$ is an encoding of $\,x$

Reconstruct Test Example:

$\hat{x} = Uy = UU^T x = X V \Sigma^{-2} V^T X^T x$

Note that the steps of reconstructing the training data and reconstructing a test example still depend on $d$, and therefore will still be impractical when the original dimensionality of the data ($d$) is very large.
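As an illustrative check in Python/NumPy (names our own): when $d \gg n$, the basis recovered through the dual form $U = XV\Sigma^{-1}$ from the small $n \times n$ matrix $X^TX$ matches the basis computed directly from the $d \times d$ matrix $XX^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, p = 500, 10, 3                       # many more dimensions than points
X = rng.normal(size=(d, n))
X = X - X.mean(axis=1, keepdims=True)

# Primal: eigenvectors of the d x d matrix X X^T (expensive when d is large)
_, U_primal = np.linalg.eigh(X @ X.T)
U_primal = U_primal[:, ::-1][:, :p]

# Dual: eigendecomposition of the small n x n matrix X^T X
lam, V = np.linalg.eigh(X.T @ X)
lam, V = lam[::-1][:p], V[:, ::-1][:, :p]
Sigma = np.diag(np.sqrt(lam))
U_dual = X @ V @ np.linalg.inv(Sigma)      # U = X V Sigma^{-1}

# Same basis: corresponding columns agree up to sign
for k in range(p):
    assert np.isclose(abs(U_primal[:, k] @ U_dual[:, k]), 1.0)
```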

### Kernel principal component analysis

#### Kernel methods

Kernel methods are a class of algorithms used for analyzing patterns and measuring similarity. They are used for finding relations within a data set. Kernel methods map data into a higher-dimensional space to make it easier to find these relations. While PCA is designed for linear variability, a lot of higher-dimensional data is nonlinear. This is where Kernel PCA can be useful: it helps model the data's variability when the data lies on a nonlinear manifold. Generally speaking, a kernel is a measure of "similarity" between data points that avoids explicitly performing the dot product between pairs of points in the feature space. This can be done using different methods. For example, the dot product looks at whether vectors are orthogonal, while the Gaussian kernel rates the similarity of two objects on a scale from 0 to 1. Kernels can also measure similarity between structured objects such as trees. We will not go into much detail at the moment.

Some common examples of kernels include:

1. Linear kernel:
$\, k_{ij} = \langle x_{i},x_{j}\rangle$
2. Polynomial kernel:
$\, k_{ij} = (1+\langle x_{i},x_{j}\rangle )^P$
3. Gaussian kernel:
$\, k_{ij} = \exp(-||x_{i}-x_{j}||^2/2\sigma^2)$

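These three kernels are straightforward to compute on a whole data matrix at once; a small illustrative sketch in Python/NumPy (note the minus sign in the Gaussian kernel's exponent):

```python
import numpy as np

def linear_kernel(X):
    """k_ij = <x_i, x_j> for columns x_i of X."""
    return X.T @ X

def polynomial_kernel(X, P=2):
    """k_ij = (1 + <x_i, x_j>)^P."""
    return (1.0 + X.T @ X) ** P

def gaussian_kernel(X, sigma=1.0):
    """k_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_norms = np.sum(X**2, axis=0)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X.T @ X
    return np.exp(-sq_dists / (2.0 * sigma**2))

X = np.random.default_rng(4).normal(size=(3, 5))
K = gaussian_kernel(X)
assert np.allclose(np.diag(K), 1.0)       # k(x, x) = 1
assert np.allclose(K, K.T)                # a kernel matrix is symmetric
```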

#### Kernel PCA

With the Kernel PCA, we take the original or observed space and map it to the feature space and then map the feature space to the embedded space of a lower dimension.

$X_{\text{observed space}} \rightarrow H_{\text{feature space}} \rightarrow Y_{\text{embedded space}}$

In the Kernel PCA algorithm, we can use the same argument as PCA and again use the Singular Value Decomposition (SVD) where:

$\, \Phi(X) = U \Sigma V^T$

where $U$ contains the eigenvectors of $\Phi(X)\Phi(X)^T$

The algorithm is similar to the Dual PCA algorithm except that the training and the test data cannot be reconstructed since $\Phi(X)$ is unknown.

#### Algorithm

Recover Basis:
For the basis of Kernel PCA, we calculate

$\, K = \Phi(X)^T \Phi(X)$

using the kernel function (so $K$ is computable without knowing $\Phi$), and let $V$ be the eigenvectors of $\Phi(X)^T \Phi(X)$ corresponding to the top $p$ eigenvalues. Finally we let $\Sigma$ be the diagonal matrix of the square roots of the top $p$ eigenvalues.

Encode Training Data:
Using Kernel PCA to encode the data, let

$\,Y = U^T \Phi(X) = \Sigma V^T$

Recall $U = \Phi(X) V \Sigma^{-1}$:

$\,Y = U^T \Phi(X) = \Sigma^{-1} V^T \Phi(X)^T \Phi(X) = \Sigma^{-1} V^T K(X,X)$

where $Y$ is an encoded matrix of the image of the original data in the function $\Phi$. $Y$ is computable without knowledge of $\Phi(X)$ via the kernel function.

Reconstruct Training Data:

$\hat{\Phi(X)} = UY = UU^T\Phi(X)=\Phi(X)V\Sigma^{-1}\Sigma V^T=\Phi(X) V V^T$
$\hat{X} = \Phi^{-1}(\hat{\Phi(X)})$

However, $\Phi(X)$ is unknown so we cannot reconstruct the data.

Encode Test Example:

$\,y = U^T \Phi(x) = \Sigma^{-1} V^T \Phi(X)^T \Phi(x) = \Sigma^{-1} V^T K(X,x)$

This is possible because we know that $\Phi(X)^T \Phi(x) = K(X,x)$.
Recall $U = \Phi(X) V \Sigma^{-1}$, where $\Phi(X)$ is the transformation of the training data $X_{d \times n}$ while $\Phi(x)$ is the transformation of the test example $x_{d \times 1}$.

Reconstruct Test Example:

$\hat{\Phi(x)} = Uy = UU^T \Phi(x) = \Phi(X) V \Sigma^{-2} V^T \Phi(X)^T \Phi(x) = \Phi(X) V \Sigma^{-2} V^T K(X,x)$

Once again, since $\Phi(X)$ is unknown, we cannot reconstruct the data.
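Putting the pieces together, a minimal kernel-PCA encoder can be sketched as follows (Python/NumPy, illustrative; the function names are our own, it uses a Gaussian kernel, and, as in the derivation above, it assumes the mapped data is centered):

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """K(A, B)_ij = exp(-||a_i - b_j||^2 / (2 sigma^2)), columns are points."""
    sq = np.sum(A**2, axis=0)[:, None] + np.sum(B**2, axis=0)[None, :] - 2 * A.T @ B
    return np.exp(-sq / (2 * sigma**2))

def kernel_pca_fit(X, p, sigma=1.0):
    """Recover the basis implicitly: top-p eigenpairs of K = Phi(X)^T Phi(X)."""
    K = gaussian_gram(X, X, sigma)
    lam, V = np.linalg.eigh(K)
    lam, V = lam[::-1][:p], V[:, ::-1][:, :p]
    Sigma_inv = np.diag(1.0 / np.sqrt(lam))
    return Sigma_inv, V

def kernel_pca_encode(X, x, Sigma_inv, V, sigma=1.0):
    """y = Sigma^{-1} V^T K(X, x); never needs Phi explicitly."""
    return Sigma_inv @ V.T @ gaussian_gram(X, x, sigma)

rng = np.random.default_rng(5)
X = rng.normal(size=(2, 30))
Sigma_inv, V = kernel_pca_fit(X, p=2)
Y = kernel_pca_encode(X, X, Sigma_inv, V)   # encode the training data itself
assert Y.shape == (2, 30)
```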

## Centering for kernel PCA and multidimensional scaling (Lecture 4: Sept. 22, 2014)

### Centering for kernel PCA

In the derivation of kernel PCA we assumed that $\sum_{i=1}^n\Phi(x_i)=0$, which is improbable for a random data set and an arbitrary mapping $\Phi$. We must be able to ensure this condition even though we don't know $\Phi$.

Let $\mu=\frac{1}{n}\sum_{j=1}^n\Phi(x_j)$

Define the centered transformation (not computable)

$\tilde{\Phi}:x\mapsto(\Phi(x)-\mu)$

and the centered kernel (computable)

$\tilde{K}:(x,y)\mapsto \tilde{\Phi}(x)^T \tilde{\Phi}(y)$

Then

$\tilde{K}(x,y) = (\Phi(x)-\mu)^T (\Phi(y)-\mu)=\Phi(x)^T\Phi(y)-\mu^T\Phi(x)-\mu^T\Phi(y)+\mu^T\mu$

$=K(x,y)-\frac{1}{n}\sum_{j=1}^n\Phi(x_j)^T\Phi(x)-\frac{1}{n}\sum_{j=1}^n\Phi(x_j)^T\Phi(y)+\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n\Phi(x_j)^T\Phi(x_i)$
$=K(x,y)-\frac{1}{n}\sum_{j=1}^n K(x_j,x)-\frac{1}{n}\sum_{j=1}^n K(x_j,y)+\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n K(x_j,x_i)$
$=K(x,y)-\frac{1}{n}\bold{1}^TK(X,x)-\frac{1}{n}\bold{1}^TK(X,y)+\frac{1}{n^2}\bold{1}^TK(X,X)\bold{1}$
where $\bold{1}=([1,1,...,1]_{1\times n})^T$

Here $X_{d\times n}$ and $\{x_j\}_{j=1}^n$ denote the data set matrix and elements respectively and $x,y$ represent test elements at which the centered kernel is evaluated.

In practice, with $\, X = [x_1 | x_2 | ... | x_n]$ as the data matrix, the Kernel Matrix, denoted by $K$, will have elements $\, [K]_{ij} = K(x_i,x_j)$.

We evaluate $\tilde{K} = K - \frac{1}{n}\bold{1}\bold{1}^TK-\frac{1}{n}K\bold{1}\bold{1}^T+\frac{1}{n^2}\bold{1}\bold{1}^TK\bold{1}\bold{1}^T$, and find the eigenvectors/eigenvalues of this matrix for Kernel PCA.
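This centering amounts to conjugating the kernel matrix by $H = I - \frac{1}{n}\bold{1}\bold{1}^T$. An illustrative check in Python/NumPy, using the linear kernel so that the result can be compared against explicitly centered data:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(3, 20))              # 20 points in 3D
n = X.shape[1]

K = X.T @ X                               # linear kernel, so Phi is the identity
one = np.ones((n, n)) / n                 # (1/n) * 1 1^T

K_tilde = K - one @ K - K @ one + one @ K @ one

# Same thing via conjugation: K_tilde = H K H
H = np.eye(n) - one
assert np.allclose(K_tilde, H @ K @ H)

# And it equals the Gram matrix of the explicitly centered data
Xc = X - X.mean(axis=1, keepdims=True)
assert np.allclose(K_tilde, Xc.T @ Xc)
```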

### Multidimensional scaling

Multidimensional Scaling (MDS) is a dimension reduction method that attempts to preserve the pair-wise distance, $d_{ij}$, between data points. For example, suppose we want to map a set of points X, $x \in \mathbb{R}^d$, to a set of points Y, $y \in \mathbb{R}^p$. We want to preserve the pair-wise distances such that $d(x_i, x_j)$ is approximately the same as $d(y_i, y_j)$.

#### MDS example

We can map points in 2 dimensions to points in 1 dimension by projecting the points onto a straight line:

Note that the pair-wise distances are relatively preserved. However, the triangle inequality is not preserved:

$\, d(x_1,x_2)+d(x_2,x_3) \gt d(x_1, x_3)$, but $\, d(y_1,y_2)+d(y_2,y_3) = d(y_1, y_3)$

Another example where it is crucial to preserve the distances between data points in a lower-dimensional space (MDS) is when dealing with distances between cities, as shown here.

#### Distance matrix

Given a $d \times n$ matrix $X$, the distance matrix is defined by:

$D^X_{n \times n} = \begin{bmatrix} d^{(X)}_{11} & d^{(X)}_{12} & ... & d^{(X)}_{1n}\\d^{(X)}_{21} & d^{(X)}_{22} & ... & d^{(X)}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\d^{(X)}_{n1} & d^{(X)}_{n2} & ... & d^{(X)}_{nn} \end{bmatrix}$

Where $d^{(X)}_{ij}$ is defined as the euclidean distance between points $x_i$ and $x_j$ in $\mathbb{R}^d$

Important properties:

• The matrix is symmetric
• i.e. $d^{(X)}_{ij} = d^{(X)}_{ji} \; \forall \; 1 \leq i,j \leq n$
• The matrix is always non-negative
• i.e. $d^{(X)}_{ij} \ge 0 \; \forall \; 1 \leq i,j \leq n$
• The diagonal of the matrix is always 0
• i.e. $d^{(X)}_{ii}= 0 \; \forall \; 1 \leq i \leq n$
• If the $x_i$'s don't overlap, then all off-diagonal elements are positive

Triangle inequality: $d_{ab} + d_{bc} \geq d_{ac}$ whenever $a, b, c$ are three different indices

For any mapping to matrix $Y_{p \times n}$, we can calculate the corresponding distance matrix

$D^Y_{n \times n} = \begin{bmatrix} d^{(Y)}_{11} & d^{(Y)}_{12} & ... & d^{(Y)}_{1n}\\d^{(Y)}_{21} & d^{(Y)}_{22} & ... & d^{(Y)}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\d^{(Y)}_{n1} & d^{(Y)}_{n2} & ... & d^{(Y)}_{nn} \end{bmatrix}$

Where $d^{(Y)}_{ij}$ is defined as the euclidean distance between points $y_i$ and $y_j$ in $\mathbb{R}^p$

Note that a kernel matrix is a positive semidefinite matrix.

#### Finding the optimal Y

MDS minimizes the metric

$\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}}$

Where $d_{ij}^{(X)} = ||x_i - x_j||$, and $d_{ij}^{(Y)} = ||y_i - y_j||$

The distance matrix $D^{(X)}$ (with its entries squared, i.e. $[D^{(X)}]_{ij} = (d_{ij}^{(X)})^2$, in the conversion below) can be converted to a kernel matrix:

$K = -\frac{1}{2}HD^{(X)}H$

Where $H = I - \frac{1}{n}ee^T$ and $e=\begin{bmatrix} 1 \\ 1 \\ ... \\ 1 \end{bmatrix}$

Here, the fact that K is a kernel matrix implies that K is positive semi-definite, so $x^TKx \geq 0$ $\forall x$

In fact, there is a one-to-one correspondence between positive semi-definite matrices and kernel (Gram) matrices
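This conversion can be checked numerically. A short illustrative sketch in Python/NumPy; note that in the classical derivation the entries fed into the double centering are the squared pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(2, 15))                       # 15 points in 2D
n = X.shape[1]

# Matrix of squared Euclidean distances d_ij^2
sq = np.sum(X**2, axis=0)
D2 = sq[:, None] + sq[None, :] - 2 * X.T @ X

# Double centering: K = -1/2 H D2 H with H = I - (1/n) e e^T
e = np.ones((n, 1))
H = np.eye(n) - (e @ e.T) / n
K = -0.5 * H @ D2 @ H

# K equals the Gram matrix of the centered points, hence positive semi-definite
Xc = X - X.mean(axis=1, keepdims=True)
assert np.allclose(K, Xc.T @ Xc)
assert np.all(np.linalg.eigvalsh(K) > -1e-9)
```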

#### Theorem

If $D^{(X)}$ is a distance matrix and if $K=-\frac{1}{2}HD^{(X)}H$, then $D^{(X)}$ is Euclidean iff $K$ is Positive Semi-Definite,
where $H=I-\frac{1}{n}\bold{1}^T\bold{1}$ and $\bold{1}=([1,1,...,1]_{1\times n})^T$

#### Relation between MDS and PCA

As has already been shown, the goal of MDS is to obtain $Y$ by minimizing $\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}}$
K is positive semi-definite so it can be written $K = X^TX$. So

$\text{min}_Y \sum_{i=1}^n{\sum_{j=1}^n{(d_{ij}^{(X)}-d_{ij}^{(Y)})^2}} =\text{min}_Y(\sum_{i=1}^n\sum_{j=1}^n(x_i^Tx_j-y_i^Ty_j)^2)$

and

$\text{min}_Y(\sum_{i=1}^n\sum_{j=1}^n(x_i^Tx_j-y_i^Ty_j)^2)=\text{min}_Y(\text{Tr}((X^TX-Y^TY)^2))$

From SVD we may make the decompositions $\, X^TX=V\Lambda V^T$ and $Y^TY=Q\hat{\Lambda}Q^T$

$\text{min}_Y(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_Y(\text{Tr}((V\Lambda V^T-Q\hat{\Lambda}Q^T)^2))=\text{min}_Y(\text{Tr}((\Lambda-V^TQ\hat{\Lambda}Q^TV)^2))$

Let $G=Q^TV$

$\text{min}_{G,\hat{\Lambda}}(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_{G,\hat{\Lambda}}(\text{Tr}((\Lambda-G^T\hat{\Lambda}G)^2))=\text{min}_{G,\hat{\Lambda}}(\text{Tr}(\Lambda^2+G^T\hat{\Lambda}GG^T\hat{\Lambda}G-2\Lambda G^T\hat{\Lambda}G))$

$=\text{min}_{G,\hat{\Lambda}}(\text{Tr}(\Lambda^2+G^T\hat{\Lambda}^2G-2\Lambda G^T\hat{\Lambda}G))$
It can be shown that, for fixed $\hat{\Lambda}$, the minimum over $G$ occurs at $G=I$, which implies that

$\text{min}_{G,\hat{\Lambda}}(\text{Tr}((X^TX-Y^TY)^2))=\text{min}_{\hat{\Lambda}}(\text{Tr}(\Lambda^2-2\Lambda\hat{\Lambda}+\hat{\Lambda}^2))=\text{min}_{\hat{\Lambda}}(\text{Tr}((\Lambda-\hat{\Lambda})^2))$

Since $\, \Lambda$ and $\hat{\Lambda}$ are diagonal matrices containing the eigenvalues of $X^TX\text{ and } Y^TY$ respectively, and all said eigenvalues are non-negative, the minimum is clearly obtained when the $p$ nonzero diagonal elements of $Y^TY$ are the $p$ largest eigenvalues of $X^TX$

## Finishing comments on MDS and Introduction to ISOMAP and LLE (Lecture 5: Sept. 29, 2014)

### ISOMAP

In Multidimensional scaling we seek a low dimensional embedding $Y$ for some high dimensional data $X$, minimizing $\|D^X - D^Y\|^2$ where $\, D^Y$ is the matrix of Euclidean distances between points of $Y$ (and similarly for $D^X$). In Isomap we assume the high dimensional data in $X$ lie on a low dimensional manifold, and we would like to replace the Euclidean distances in $D^X$ with distances along this manifold. Since we have no information about this manifold, we approximate it by forming the k-nearest-neighbour graph between points in the dataset. The distance on the manifold can then be approximated by the shortest path between points. This is called the geodesic distance, denoted $D^G$; it is useful because it allows nonlinear structure to be accounted for in the lower dimensional space. From this point the rest of the algorithm is the same as MDS.

It is worth noting that since $D^{G}$ is not a Euclidean distance matrix, the matrix $K = -\frac{1}{2}HD^{(G)}H$ is not guaranteed to be positive semi-definite. This is handled in the implementation [3] by keeping only the positive eigenvalues of the real part, which can be viewed as projecting the matrix onto the cone of positive semi-definite matrices.

It is important to note that k determines "locality". Isomap captures local structure and "unfolds" the manifold.
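The geodesic approximation described above can be sketched in a few lines of NumPy (the function name is ours, and practical implementations use faster shortest-path algorithms than Floyd-Warshall):

```python
import numpy as np

def geodesic_distances(X, k):
    """Approximate manifold distances: k-NN graph + Floyd-Warshall shortest paths."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:   # k nearest neighbours of x_i
            G[i, j] = G[j, i] = D[i, j]       # keep the graph symmetric
    for m in range(n):                        # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

# Points on a quarter circle: the geodesic (arc) distance between the end
# points exceeds the straight-line Euclidean distance between them.
t = np.linspace(0, np.pi / 2, 20)
X = np.c_[np.cos(t), np.sin(t)]
DG = geodesic_distances(X, k=2)
assert np.isfinite(DG).all()
assert DG[0, -1] > np.linalg.norm(X[0] - X[-1])
```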

The swiss roll manifold is a commonly discussed example of this. Note that the Euclidean distance between points A and B drastically differs from the Geodesic distance.

#### ISOMAP Results

ISOMAP was proposed in the paper A Global Geometric Framework for Nonlinear Dimensionality Reduction; the figures below show some of its results. The first image shows the dataset of face images at different orientations: the x axis captures the horizontal orientation, the y axis captures the vertical orientation, and the bar captures the lighting direction. The second image shows ISOMAP's results on a dataset of handwritten digit 2 images.

### Local Linear Embedding (LLE)

Local Linear Embedding (LLE) is another technique for dimensionality reduction that tries to find a mapping from high dimensional space to low dimensional space such that local distances between points are preserved. LLE reconstructs each point based on its k nearest neighbours.

It approximates each $x_i \in X$ as a weighted combination of its k nearest neighbours $\mathbf{\{x_{i1},x_{i2},...,x_{ik}\}}$, i.e. $\mathbf{x_i \approx w_1x_{i1}+w_2x_{i2}+...+w_kx_{ik}}$. LLE is therefore the optimization problem:
$\mathbf{ \text{min}_W \sum_{i=1}^n{(x_i-\sum_{j=1}^n{w_{ij}x_j)^2}}}$

such that $\mathbf{\sum_{j=1}^n w_{ij} = 1}$, with $\mathbf{w_{ij} = 0}$ whenever $\mathbf{x_j}$ is not among the k nearest neighbours of $\mathbf{x_i}$

#### Algorithm

• Find the k-nearest neighbour for each data point
• Compute $\mathbf{W_{n \times n}}$ through $\mathbf{\text{min}_W \sum_{i=1}^n{(x_i-\sum_{j=1}^n{W_{ij}x_j)^2}}}$ such that $\mathbf{\sum_j w_{ij} = 1}$ (so each point is an affine combination of its neighbours), where $\mathbf{x_j}$ ranges over the k-nearest neighbours of $\mathbf{x_i}$
• Let $\mathbf{V_i = [x_j \,|\, x_j}$ is a neighbour of $\mathbf{ x_i ]_{d \times k}}$ and let $\mathbf{w(i)}$ be the k weights of point $\mathbf{x_i}$ (i.e. row i of W with all zeroes removed). Then:
$\mathbf{V_i*w(i) = \sum_{j=1}^n{w_{ij}x_j}}$
and hence we need to optimize
$\mathbf{\, \text{min}_{w(i)} |x_i-V_i*w(i)|^2}$ under the constraint $\mathbf{\sum_{j=1}^n{w_{ij}}=1}$
This is a separable optimization problem which can be solved for each point independently. Since $\mathbf{e_k^Tw(i)=1}$, it can be rewritten as
$\mathbf{min|x_ie_k^Tw(i)-V_iw(i)|^2 }$, where $e=\begin{bmatrix} 1 \ 1 \ ... \ 1 \end{bmatrix}^T$ is the $k \times 1$ vector of ones (also written $e_k$).
Now we have
$\mathbf{\, |x_ie_k^Tw(i)-V_iw(i)|^2 =(x_ie^Tw(i)-V_iw(i))^T(x_ie^Tw(i)-V_iw(i))= w(i)^T(x_ie^T-V_i)^T(x_ie^T-V_i)w(i) }$

Let $\mathbf{\, G =(x_ie^T-V_i)^T(x_ie^T-V_i) }$. Therefore we can instead simplify
$\mathbf{ min \;w(i)^TGw(i)}$ under the constraint $\mathbf{\, w(i)^Te=1}$

• Use Lagrange Multipliers to optimize the above equation:
$\mathbf{\, L(w(i),\lambda)=w(i)^TGw(i)-\lambda(w(i)^Te-1) }$

$\mathbf{\frac{\partial L}{\partial {w(i)}}=2Gw(i)-\lambda e=0 }$ which implies $\mathbf{ Gw(i)=\frac{\lambda}{2}e}$ and hence $\mathbf{ w(i)=\frac{\lambda}{2}G^{-1}e }$
Note that we do not need to compute $\mathbf{\lambda}$, but can just compute $\mathbf{\, G^{-1}e}$, then re-scale so $\mathbf{\, w(i)^Te=1}$
• Finally, solve $\mathbf{\text{min}_Y \sum_{i=1}^n{(y_i-\sum_{j=1}^n{w_{ij}y_j)^2}}}$ where w is known and y is unknown

Notes:

$G_{k \times k}$ has rank at most $d$, so it is invertible only under some conditions on $k$: we need $k \leq d$; when $k > d$, a small regularization term is typically added to $G$ before inverting

LLE does not directly handle the out-of-sample case; there are extensions that address this, as in [4]
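The weight-finding step above can be sketched in NumPy as follows (the regularization term is a standard addition, not part of the notes, used when $G$ is singular, e.g. when $k > d$):

```python
import numpy as np

def lle_weights(X, i, nbr_idx, reg=1e-3):
    """Reconstruction weights of x_i from its neighbours: w(i) proportional to G^{-1} e."""
    Vi = X[nbr_idx].T                         # d x k matrix of neighbours
    k = Vi.shape[1]
    E = X[i][:, None] @ np.ones((1, k))       # x_i e^T
    G = (E - Vi).T @ (E - Vi)                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)        # regularize in case G is singular
    w = np.linalg.solve(G, np.ones(k))        # G^{-1} e (lambda is not needed)
    return w / w.sum()                        # rescale so w(i)^T e = 1

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))
w = lle_weights(X, 0, [1, 2, 3])
assert np.isclose(w.sum(), 1.0)               # the constraint is satisfied
```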

## LLE continued, introduction to maximum variance unfolding (MVU) (Lecture 6: Oct. 1, 2014)

### LLE

Recall the steps of LLE can be summarized as follows:

1. For each data point, find its $k$ nearest neighbours for some fixed $k$.
2. Find reconstruction weights $W$, minimizing
$\sum_{i=1}^n ||x_i - \sum_{j=1}^n w_{ij}x_j||^2$.
3. Reconstruct the low dimensional embedding with respect to these weights.
That is, find low dimensional $Y$ minimizing
$\sum_{i=1}^n||y_i - \sum_{j=1}^nw_{ij}y_j||^2$.

The previous lecture covered how to find the weights $W$. Here we will discuss how to find the low-dimensional embedding $Y$ using these weights.

First, we want to write the previous equation in a matrix form.
Consider that $Y$ is a $p \times n$ matrix and $I_{:i}$ is the $i^{th}$ column ($n \times 1$) of the identity matrix (Matlab-style notation)

Then the sample, $i$, can be represented as follows: $y_i = Y * I_{:i} = \begin{bmatrix} y_1, y_2 ,... y_i,..., y_n \end{bmatrix} \begin{bmatrix} 0 \\0 \\... \\ 1 \\ ... \\0 \end{bmatrix}$

Note that: $I_{i:} Y^T = y_i^T$ where $I_{i:}$ is the $i^{th}$ row of the identity matrix

Similarly, $\sum_j w_{ij} y_j^T = W_{i:} Y^T$

So the objective is equivalent to minimizing $\sum_{i=1}^n||I_{i:} Y^T - W_{i:}Y^T||^2 = ||IY^T - WY^T||^2$

But first, to prevent the trivial solution of the 0-matrix, we add the constraint that $\frac{1}{n}YY^T = I$. Now, the objective can be rewritten as minimizing:

$||Y^T - WY^T||^2_F$

where the above norm is the Frobenius norm, which takes the square root of the sum of squares of all the entries in the matrix. Recall this norm can be written in terms of the trace operator:

$||Y^T - WY^T||^2_F = ||(I - W) Y^T||^2_F = Tr(((I - W) Y^T)^T ((I - W) Y^T)) = Tr(Y(I-W)^T(I-W)Y^T)$

By letting $\, M=(I-W)^T(I-W)$, the objective function will be

min $Tr(Y M Y^T)$

This minimization has a well-known solution: we can set the rows of $Y$ to be the eigenvectors of $M$ with the $p$ smallest eigenvalues, where $p$ is the dimension of the reduced space. To see this, form the Lagrangian

$L(Y, \Lambda) = Tr(YMY^T) - Tr(\Lambda (\frac{1}{n} YY^T - I))$

$\frac{\partial}{\partial Y} L(Y, \Lambda) = 2YM - \frac{2}{n} \Lambda Y$

Setting this equal to zero, transposing, and absorbing the factor $\frac{1}{n}$ into $\Lambda^*$ gives

$MY^T = \frac{1}{n} Y^T \Lambda$

$MY^T = Y^T \Lambda^*$

so the rows of $Y$ are eigenvectors of $M$. If we want to reduce the dimension to p, we choose the p + 1 smallest eigenvectors and discard the one with eigenvalue $0$: since the rows of $W$ sum to one, the constant vector is an eigenvector of $M$ with eigenvalue $0$, just as for a graph Laplacian (the number of $0$ eigenvalues of a Laplacian is the number of connected components).
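A minimal NumPy sketch of this final step (here forming the matrix from $I-W$ as $(I-W)^T(I-W)$, following the common LLE formulation; its constant eigenvector has eigenvalue 0 because the rows of $W$ sum to one):

```python
import numpy as np

def lle_embed(W, p):
    """Embedding from the bottom eigenvectors of M, skipping the constant one."""
    n = W.shape[0]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    return vecs[:, 1:p + 1]                   # skip the 0-eigenvalue eigenvector

rng = np.random.default_rng(2)
W = rng.random((8, 8))
W /= W.sum(axis=1, keepdims=True)             # make each row sum to one
Y = lle_embed(W, p=2)

# Rows of W sum to one, so the constant vector is a 0-eigenvector of M.
M = (np.eye(8) - W).T @ (np.eye(8) - W)
assert np.allclose(M @ np.ones(8), 0)
assert Y.shape == (8, 2)
```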

### Comparison between Dimensionality Reduction Approaches

| Approach | Solution: eigenvectors of |
|----------|---------------------------|
| PCA | $\mathbf{ X^T X }$ or, via the SVD, $\mathbf{ \Sigma V^T }$ |
| MDS | $\mathbf{ - \frac{1}{2} H D H }$ where $H = I - \frac{1}{n} e e^T$ |
| ISOMAP | $\mathbf{ - \frac{1}{2} H D^G H }$ with the same $H$ |
| LLE | $\mathbf{ (I - W)(I - W)^T }$ |
| KPCA | $\mathbf{K}$ |

From the above table we can interpret all of these algorithms as kernel PCA; more details are presented here [5].

Note that PCA, MDS, ISOMAP and KPCA choose the largest eigenvalue/eigenvector pairs for embedding while LLE chooses the smallest pair. However, in LLE, if we let $\lambda_{max}$ be the largest eigenvalue of $\mathbf{L = (I-W)^T(I-W)}$ and replace the matrix $\mathbf{ (I - W)(I - W)^T }$ with $\mathbf{ \lambda_{max}I - (I - W)(I - W)^T }$, the largest eigenvalue/eigenvector pairs will be chosen instead.

### Maximum variance unfolding

In maximum variance unfolding (MVU), as in isomap and LLE, we assume that the data is sampled from a low-dimensional manifold that is embedded within a higher-dimensional Euclidean space. The aim is to "unfold" this manifold in a lower-dimensional Euclidean space such that the local structure of each point is preserved. We would like to preserve the distance from any point to each of its $k$ nearest neighbours, while maximizing the pairwise variance for all points which are not its neighbours. The MVU algorithm uses semidefinite programming to find such a kernel.

#### Optimization constraints

For a feature map $\phi: x \mapsto \phi(x)$ with kernel matrix $K$, we have the following 3 constraints:

1. $K$ must be centred, i.e. $\sum\limits_{i,j} k_{ij} = 0$.
2. $K$ should be positive semi-definite, i.e. $K \succeq 0$.
3. The local distance should be maintained between the data and its embedding.
This means that for $i, j$ such that $x_i, x_j$ are neighbours, we require that $\phi$ satisfies
$\, | x_i - x_j |^2 = |\phi(x_i) - \phi(x_j)|^2$
$\, = [\phi(x_i) - \phi(x_j)]^T [\phi(x_i) - \phi(x_j)]$
$\, =\phi(x_i)^T \phi(x_i) - \phi(x_i)^T \phi(x_j) - \phi(x_j)^T \phi(x_i) + \phi(x_j)^T \phi(x_j)$
$\, =k_{ii} - k_{ij} - k_{ji} + k_{jj}$
$\, =k_{ii} -2k_{ij} + k_{jj}$
where $\, K=\phi^T \phi$ is the $n \times n$ kernel matrix.
We can summarize this third constraint as:
$\, n_{ij} ||x_i - x_j||^2 = n_{ij} || \phi(x_i) - \phi(x_j) ||^2$
where $n_{ij}$ is the neighbourhood indicator, that is, $n_{ij} = 1$ if $x_i, x_j$ are neighbours and $0$ otherwise.
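The expansion above can be verified numerically for the identity feature map $\phi(x) = x$, for which the kernel matrix is just the Gram matrix $XX^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 4))
K = X @ X.T                        # Gram matrix: k_ij = x_i^T x_j

# k_ii - 2 k_ij + k_jj reproduces the squared Euclidean distance.
for i in range(5):
    for j in range(5):
        lhs = np.linalg.norm(X[i] - X[j]) ** 2
        assert np.isclose(lhs, K[i, i] - 2 * K[i, j] + K[j, j])
```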

#### Optimization problem

The ideal optimization problem would be to minimize the rank of the matrix K, but unfortunately that is not a convex problem so we cannot solve it. Recall that the eigenvalue corresponding to an eigenvector represents the variance in the direction of that eigenvector. So the total variance is the sum of the eigenvalues, which equals the trace of K. Since we would like to maximize the variance of non-neighbour points, we can use the objective function

$\,\text{max tr}(K)$

and thus the problem can be solved using semidefinite programming.

#### MVU Method

1. Construct the neighbourhood graph using k-nearest neighbours
2. Find the kernel K by solving the SDP problem: max $Tr(K)$ such that
$K_{ii}+K_{jj}-2K_{ij} = D_{ij}$ for all neighbouring pairs $(i,j)$, $\sum_{ij} K_{ij} = 0, K \succeq 0$, where $D_{ij} = ||x_i - x_j||^2$ is the squared input distance
3. Apply Kernel PCA on the learned kernel K

## Action Respecting Embedding and Spectral Clustering (Lecture 7: Oct. 6, 2014)

### SDE Paper Notes

The paper Unsupervised Learning of Image Manifolds by Semidefinite Programming introduced the maximum variance unfolding technique (there called semidefinite embedding, SDE) and compared it against existing dimensionality reduction techniques. The first figure compares SDE against PCA on a faces dataset: to retain the whole dataset's variation we need 5 dimensions using SDE but 10 dimensions using PCA. The second figure compares SDE against ISOMAP on a non-convex dataset: here we need only 2 dimensions with SDE but more than 5 dimensions with ISOMAP. The two figures show that SDE can represent the data's variation using far fewer eigenvectors than PCA or ISOMAP.

### Action Respecting Embedding

When we are dealing with temporal data, in which the order of data is important, we could consider a technique called Action Respecting Embedding. Examples include stock prices, which are time series, or a robot that moves around and takes pictures at different positions. Usually there are actions between data points. There can be many types of actions. For example, consider a robot taking pictures at different positions. Actions include stepping forward, stepping backward, stepping right, turning, etc. Consider a financial time series. Actions include stock splits, interest rate reduction, etc. Consider a patient under a treatment. Actions include taking a specific step in the treatment or taking a specific drug, which affects blood sugar level, cholesterol, etc.

Some information cannot be captured in the form of similarities.

Action Respecting Embedding takes a set of data points $\, x_{1},...,x_{n}$ along with associated discrete actions $\, a_{1},...,a_{n-1}$. The data points are assumed to be in temporal order, where $\, a_{i}$ was taken between $\, x_{i}$ and $\, x_{i+1}$.

Now we realize that intuitively, if two points have the same action, then the distance between them should not change after the action. There are only three types of transformations that do not change the distance between points. They are rotation, reflection, and translation. Since rotation and reflection have the same properties, we will call both of these rotation for our purposes. Any other transformation does not preserve distances. Note that we are able to model any action in low dimensional space by rotation, translation or any linear combination of them.

### Optimization Problem

Recall that the kernel satisfies the property $\, |\phi(x_i) - \phi(x_j)|^2 = | x_i - x_j |^2$ for neighbours. We can formulate the idea that an action preserves distances as $\, |\phi(x_{i+1}) - \phi(x_{j+1})|^2 = |\phi(x_i) - \phi(x_j)|^2$ whenever the same action is taken at $\, x_i$ and $\, x_j$ (i.e. $a_i = a_j$).

The maximum variance unfolding problem is

$\, \text{max tr}(K)$
such that $\, |x_i - x_j|^2 = k_{ii} -2k_{ij} + k_{jj}$ for all neighbouring pairs $x_i, x_j$
$\sum\limits_{i,j} k_{ij} = 0$
$K \succeq 0$

The Action Respecting Embedding problem is similar to the maximum variance unfolding problem, with another constraint:

For all $i, j$ such that $a_{i} = a_{j}$:

$\,k_{(i+1)(i+1)} - 2k_{(i+1)(j+1)} + k_{(j+1)(j+1)} = k_{ii} - 2k_{ij} + k_{jj}$

### A Simple Example

Consider the two examples presented in the ARE paper [6] of a robot moving around taking pictures, with some action between consecutive pictures. The input is a series of pictures with actions between them. Note that the algorithm is given no semantics for the actions, only labels.

The first example is a robot that takes 40 steps forward and then 20 steps backward. The following figure (taken from the ARE paper [7]) shows the actions taken by the robot and the results of ARE and SDE. We can see that ARE captures the trajectory of the robot: although ARE has no semantics for each action, it captured an essential property of these actions, namely that they are opposite in direction and equal in magnitude. We could look at the position on the manifold vs. time for a robot that took 40 steps forward and 20 steps backward, and at the same analysis for a robot whose 20 backward steps are twice as big.

Another complicated action and the result of ARE and SDE are shown in the next figure (taken from ARE paper [8]), where we can see that ARE captured the trajectory of the robot.

Sometimes it is very hard for a human to recognize what happened by merely looking at the data, but this algorithm makes it easy to see exactly what happened.

### Types of Applications

A common type of application is planning. This is when we need to find a sequence of decisions which achieve a desired goal, for example, coming up with an optimal sequence of actions to get from one point to another.

For example, what type of treatment should I give a patient in order to get from a particular state to a desired state?

For each action $a$, we have a collection of data points $\, (y_{t},y_{t+1})$ such that $\, y_{t} \to y_{t+1}$.

Through the algorithm, we learn $\, f_{a}$: $\, f_{a}(y_{t}) = A_{a}y_{t} + b_{a} = y_{t+1}$ where $A_{a}$ is the rotation matrix for action $a$, and $b_{a}$ is the translation vector for action $a$. $A_{a}$ and $b_{a}$ can be estimated in closed form, so that we know what happens to any input $y_t$ if action $a$ is taken.
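One way to obtain such a closed-form estimate is the orthogonal Procrustes (Kabsch) solution on the observed pairs $(y_t, y_{t+1})$; the notes do not spell out the estimator, so the details below (function name, reflection handling omitted) are ours:

```python
import numpy as np

def fit_action(Yt, Yt1):
    """Closed-form rotation A and translation b with A @ Yt[i] + b = Yt1[i],
    via the orthogonal Procrustes / Kabsch construction (a sketch)."""
    mu0, mu1 = Yt.mean(axis=0), Yt1.mean(axis=0)
    C = (Yt1 - mu1).T @ (Yt - mu0)            # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(C)
    A = U @ Vt                                # nearest orthogonal matrix
    b = mu1 - A @ mu0
    return A, b

# Recover a known rotation + translation from noiseless observed pairs.
rng = np.random.default_rng(4)
th = 0.7
A_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
b_true = np.array([1.0, -2.0])
Yt = rng.standard_normal((30, 2))
Yt1 = Yt @ A_true.T + b_true
A, b = fit_action(Yt, Yt1)
assert np.allclose(A, A_true) and np.allclose(b, b_true)
```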

### Spectral Clustering

Clustering is the task of grouping data so that elements in the same group are more similar to each other than to the elements in other groups. In spectral clustering, we first represent the data as a graph and then perform clustering.

For the time being, we will only consider the case where there are two clusters.

Given $\{x_i\}^{n}_{i=1}$ and a notion of similarity or dissimilarity, we can represent the data as a weighted undirected graph $\,G=(V,E)$ where V represents the vertices (the data points) and E represents the edges (weighted by the similarity or dissimilarity between pairs of data points).

Using intuition, we can see that the goal is to minimize the sum of the weights of the edges that we cut (i.e. edges we remove from G until we have a disconnected graph)
$cut(A,B)=\sum_{i\in A, j\in B} w_{ij}$

where A and B are the partitions of the vertices in G made by a cut, and $\,w_{ij}$ is the weight between points i and j.

Consider the following graphs of data:

Although minimizing cut(A,B) minimizes the total weight of the removed edges, it can favour cutting many low-weight edges and produce very unbalanced partitions. For example, if there is one datum that is very dissimilar to the rest of the data, we may end up partitioning this one point off by itself even though many edges are cut. This is the case in Example 3 above. To avoid this imbalance in partition sizes, we minimize the ratio-cut instead.

We define ratio-cut as: $ratiocut = \frac{cut(A, \bar{A})}{|A|} + \frac{cut(\bar{A}, A)}{|\bar{A}|}$ where $\, |A|$ is the number of points in set $\, A$ and $\, \bar{A}$ is the complement of $\, A$.

This equation will be minimized if there is a balance between set $\, A$ and $\bar{A}$. This, however, is an NP-hard problem; we must relax the constraints to arrive at a computationally feasible solution.

Let $f \in \mathbb{R}^n$ such that

$f_{i} = \left\{ \begin{array}{lr} \sqrt{\frac{|\bar{A}|}{|A|}} & i \in A\\ -\sqrt{\frac{|A|}{|\bar{A}|}} & i \in \bar{A} \end{array} \right.$

We will show in the next lecture that minimizing $\, \sum_{ij}w_{ij}(f_{i}-f_{j})^2$ is equivalent to minimizing the ratio-cut.

## Spectral Clustering (Lecture 8: Oct. 8, 2014)

### Problem Formulation and Notations

Let $\, G=( V, E)$ be a graph, where V is the set of vertices and E is the set of edges.

In graph theory, a partition of the vertices of a graph into two disjoint sets is called a cut. The set of edges with one endpoint in each subset is called a cut-set; these edges cross the cut.

In a connected graph each cut-set determines a unique cut, so cuts are sometimes identified with their cut-sets rather than with their vertex partitions.

A cut $\, C = ( A, B )$ is a partition of $V$ of a graph $\, G = (V, E)$ into two subsets $A$ and $B$. The cut-set of a cut $\, C = ( A, B )$ is the set $\{ ( i,j ) \in E | i \in A, j \in B \}$ of edges that have one endpoint in $A$ and the other endpoint in $B$. If $i$ and $j$ are specified vertices of the graph $G$, an $i$ $-$ $j$ cut is a cut in which $i$ belongs to the set $A$ , and $j$ belongs to the set $B$.

- In an Unweighted Undirected graph, the size or weight of a cut is the number of edges crossing the cut.

- In a Weighted graph, the value or weight is defined by the sum of the weights of the edges crossing the cut.

- A cut is minimum if the size or weight of the cut is not larger than the size of any other cut.

In other words, a minimum cut partitions the vertices into two disjoint subsets while cutting the smallest total edge weight, which is 2 here in the example.

- A cut is maximum if the size of the cut is not smaller than the size of any other cut, which is 5 here in the example.

$cut ( A, B) = \sum_{i \in A, j \in B} w_{ij}$

Minimizing this is a combinatorial problem.

$ratiocut (A, \bar{A}) = \frac{cut( A, \bar{A})} {|A|} + \frac{cut( \bar{A}, A)} {| \bar{A}|}$

which results in more balanced subgraphs in terms of the size of each subgraph.

$f \in {R}^n$ $f_i = \left\{ \begin{array}{lr} \sqrt{\frac{| \bar{A}|}{| A|}} & i \in A\\ -\sqrt{\frac{| A|}{| \bar{A}|}} & i \in \bar{A} \end{array} \right.$

The previous function gives any point i that belongs to $A$ a positive value, while it gives any point j that belongs to $\bar{A}$ a negative number.

Ex. If $| A| = | \bar{A}|$ then the values for i will be +1 and values for j will be -1.

### Derivation

Problem: Prove that minimizing $\, \sum_{i,j} w_{ij} * ( f_i - f_j)^2$ is equivalent to minimizing ratiocut $( A, \bar{A})$

Proof:

We have 4 cases: 1) $i \in A \; and\; j \in A$ , 2) $i \in A \; and\; j \in \bar{A}$ , 3) $i \in \bar{A} \; and\; j \in A$ , and 4) $i \in \bar{A} \; and\; j \in \bar{A}$

$\sum_{i,j} w_{ij} * ( f_i - f_j)^2 = \sum_{ i \in A, j \in \bar{A}} w_{ij} * (\sqrt{\frac{| \bar{A}|}{| A|}} + \sqrt{\frac{| A|}{| \bar{A}|}}) ^ 2 + \sum_{ i \in \bar{A}, j \in A} w_{ij} * (- \sqrt{\frac{| A|}{| \bar{A}|}} - \sqrt{\frac{| \bar{A}|}{| A|}}) ^ 2$

since both $\sum_{ i \in A, j \in A} w_{ij} * ( f_i - f_j)^2 =0 \; and \; \sum_{ i \in \bar{A}, j \in \bar{A}} w_{ij} * ( f_i - f_j)^2 = 0$

$= \sum_{ i \in A, j \in \bar{A}} w_{ij} * (\frac{| \bar{A}|}{| A|} + \frac{| A|}{| \bar{A}|} + 2 * \sqrt {\frac{| \bar{A}|} {| A|} * \frac{| A|} {| \bar{A}|}}) + \sum_{ i \in \bar{A}, j \in A} w_{ij} * ( \frac{| A|}{| \bar{A}|} + \frac{| \bar{A}|}{| A|} + 2 * \sqrt {\frac{| A|} {| \bar{A}|} * \frac{| \bar{A}|} { | A|}})$

Note that $\frac{| \bar{A}|} {| A|} * \frac{| A|} {| \bar{A}|} = 1$ and $2 = 1 + 1 = \frac{| A|} {| A|} + \frac {| \bar{A}|} {| \bar{A}|}$

$= \sum_{ i \in A, j \in \bar{A}} w_{ij} * (\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) + \sum_{ i \in \bar{A}, j \in A} w_{ij} * ( \frac{| A|+| \bar{A}|}{| \bar{A}|} + \frac{| \bar{A}| + | A|}{| A|} )$

$= (\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) * \sum_{ i \in A, j \in \bar{A}} w_{ij} + (\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) * \sum_{ i \in \bar{A}, j \in A} w_{ij}$

Note that $\sum_{ i \in A, j \in \bar{A}} w_{ij} = cut ( A, \bar{A})$ and $\sum_{ i \in \bar{A}, j \in A} w_{ij} = cut ( \bar{A}, A)$ .

Therefore, our main equation becomes:

$(\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) * cut ( A, \bar{A}) + (\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) * cut ( \bar{A}, A)$

Since $cut( A, \bar{A}) = cut( \bar{A}, A)$, then:

$(\frac{| \bar{A}|+| A|}{| A|} + \frac{| A| + | \bar{A}|}{| \bar{A}|}) * 2 * cut ( A, \bar{A})$

Let $n = | \bar{A}| + | A|$, then:

$(\frac{ n}{| A|} + \frac{ n}{| \bar{A}|}) * 2 * cut ( A, \bar{A})$

$= 2 * n * (\frac{cut( A, \bar{A})}{| A|} + \frac{cut( \bar{A}, A)}{| \bar{A}|})$

$= 2 * n * ratiocut( A, \bar{A})$

Since $n$ is a fixed positive constant, minimizing $2 * n * ratiocut$ is equivalent to minimizing ratiocut. Therefore, minimizing $\, \sum_{i,j} w_{ij} * ( f_i - f_j)^2$ is equivalent to minimizing ratiocut $( A, \bar{A})$. This completes the proof.
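The identity just proved can be checked numerically for a random weighted graph and an arbitrary partition:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2                  # symmetric weight matrix
np.fill_diagonal(W, 0)

A = [0, 1, 2]                      # a partition A and its complement
Ab = [i for i in range(n) if i not in A]

f = np.empty(n)
f[A] = np.sqrt(len(Ab) / len(A))
f[Ab] = -np.sqrt(len(A) / len(Ab))

lhs = sum(W[i, j] * (f[i] - f[j]) ** 2 for i in range(n) for j in range(n))
cut = W[np.ix_(A, Ab)].sum()
ratiocut = cut / len(A) + cut / len(Ab)
assert np.isclose(lhs, 2 * n * ratiocut)
```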

Problem: find an appropriate relaxation of the optimization problem.

Define $\, {W = [w_{ij}]_{n \times n}}$ to be the matrix of all weights.

${W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,n} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{n,1} & w_{n,2} & \cdots & w_{n,n} \end{bmatrix}}$

Define $D$ to be a diagonal $n \times n$ matrix such that:

$\, {d_i = \sum_j w_{ij}}$

${D = \begin{bmatrix} d_{1} & 0 & \cdots & 0 \\ 0 & d_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{n} \end{bmatrix}}$

Define $L$ to be the ${n \times n}$ matrix $L = D - W$; $L$ is called the Laplacian of the graph.

Show: $\, {2 f^TLf = \sum_{ij} w_{ij} (f_i - f_j)^2}$

Proof:

$\, {L}$ and $\, {w_{ij}}$ are known.

$\, {L.H.S = 2 f^T L f = 2 f^T (D-W) f = 2 f^T D f - 2 f^T W f}$

Note that $\, {f^T D f}$ can be written as ${\sum_i d_i f_i^2}$ and $\, {f^T W f}$ can be written as $\, {\sum_{i,j} w_{ij} f_i f_j}$

Recall that we want to reach the quadratic form on the right-hand side: ${(f_i- f_j)^2 = f_i^2 - 2f_i f_j + f_j^2}$

${L.H.S = \sum_i d_i f_i^2 + \sum_{i} d_i f_i^2 - 2 \sum_{i,j} w_{ij} f_i f_j}$

By changing the $i$ in the second term into $j$

${L.H.S = \sum_i d_i f_i^2 + \sum_{j} d_j f_j^2 - 2 \sum_{i,j} w_{ij} f_i f_j = \sum_i d_i f_i^2 - 2 \sum_{i,j} w_{ij} f_i f_j + \sum_{j} d_j f_j^2}$

By substituting the value of $d_i$

${L.H.S = \sum_{i,j} w_{ij} f_i^2 - 2 \sum_{i, j} w_{ij} f_i f_j + \sum_{i,j}w_{ij}f_j^2 =\sum_{i,j} w_{ij} (f_i - f_j)^2 = R.H.S}$

Which means minimizing the R.H.S minimizes the L.H.S
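This identity, $2f^TLf = \sum_{ij} w_{ij}(f_i - f_j)^2$, is easy to verify numerically for a random symmetric weight matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 7
W = rng.random((n, n))
W = (W + W.T) / 2                  # symmetric weights
np.fill_diagonal(W, 0)
D = np.diag(W.sum(axis=1))
L = D - W                          # graph Laplacian

f = rng.standard_normal(n)
lhs = 2 * f @ L @ f
rhs = sum(W[i, j] * (f[i] - f[j]) ** 2 for i in range(n) for j in range(n))
assert np.isclose(lhs, rhs)
```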

Therefore our goal is to solve the following optimization problem:

$\, \text{min}_f( {f^T L f})$

$\, s.t. {f^T f = n}$

instead of solving the more difficult ratiocut optimization problem.

Our new problem can be solved by using Lagrange multiplier:

$\, {L(f, \lambda) = f^T L f - \lambda (f^Tf -n)}$

${\frac{\delta L}{\delta f} = 2 Lf - 2 \lambda f = 0}$

$\, { Lf = \lambda f}$

[9] Since a graph with one connected component has only one constant eigenvector ${u = \mathbf{1}_n}$ associated with the ${0}$ eigenvalue, and since the previous equation shows that ${f}$ is an eigenvector of ${L}$, we take ${f}$ to be the eigenvector corresponding to the second smallest eigenvalue; its positive elements correspond to one class and its negative elements to the other.
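Putting the pieces together, here is a minimal NumPy sketch of two-class spectral clustering on two well-separated groups (the Gaussian similarity is our choice of weight function):

```python
import numpy as np

# Two well-separated groups; similarity = Gaussian kernel on squared distance.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(2, 0.1, (10, 2))])
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-D2)
np.fill_diagonal(W, 0)

L = np.diag(W.sum(axis=1)) - W     # Laplacian L = D - W
vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
f = vecs[:, 1]                     # eigenvector of the 2nd smallest eigenvalue
labels = (f > 0).astype(int)

# The sign of f separates the two groups.
assert len(set(labels[:10])) == 1 and len(set(labels[10:])) == 1
assert labels[0] != labels[10]
```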

### Spectral Clustering Results

The following figures show [10] the results of spectral clustering against other clustering algorithms. As clustering is an ill-defined problem, it is hard to say that one algorithm groups the points better than another. However, in the figures below, spectral clustering groups the points as a person might.

The most notable issue with spectral clustering is its dependency on the definition of "similar" and/or "dissimilar". For example, suppose a class of students were to be cut into two groups. If weights were given to pairs of students according to the similarity of their gender, then the two groups would presumably be separated into males and females. However, this weighting doesn't account for other similarities the class might have such as colour of clothing worn that day, or less visible similarities such as number of hours slept the previous night. The definition of "similar" is therefore what drove the cut, and not the actual similarities between the students. Spectral clustering is a problem that only has subjective solutions due to the subjective nature of the problem.

### Multiclass Spectral Clustering

Spectral clustering can be used in the multiclass case by choosing the smallest $k-1$ eigenvectors after ignoring the smallest one (associated with the $0$ eigenvalue). This results in a matrix of size $n \times (k-1)$ which can be used as input to another clustering algorithm.

### Laplacian Eigenmap

Laplacian eigenmap [11] solves the dimensionality reduction problem by obtaining the eigenvectors corresponding to the lowest ${d+1}$ eigenvalues and projecting the data onto these eigenvectors. This is very similar to spectral clustering.

We can interpret the multiclass spectral clustering as a dimensionality reduction step before applying a clustering technique.

Also spectral clustering can be interpreted from Random walk point of view as explained here [12]: "Random walk is a stochastic process on a graph which moves randomly from node to node. So spectral clustering tries to find a graph partition such that random walk stays long within a cluster and with seldom moves from one cluster to another."

A side note: the Frobenius norm of the covariance matrix can measure the linear correlation between two variables; the larger the norm, the larger the correlation. This holds only for linear correlation: for nonlinear correlation, the data should first be mapped to a Hilbert space, applying the covariance operator there and taking the Hilbert-Schmidt norm.

The following is a Matlab code sample that shows the norm of the covariance matrix for uncorrelated, linearly correlated, and non-linearly correlated random variables.

>> X1 = randn(1, 10000);
>> X2 = randn(1, 10000);
>> X = [X1;X2];
>> cov(X')
>> norm(cov(X'))  % small as X1 and X2 are independent
>> X2 = 2 * X1;
>> X = [X1;X2];
>> cov(X')
>> norm(cov(X')) % large as X1 and X2 are linearly dependent
>> X2 = sin(X1);
>> X = [X1;X2];
>> cov(X')
>> norm(cov(X')) % not large as the dependency between X1 and X2 is not linear.


## Supervised Principal Component Analysis (Lecture 9: Oct. 15, 2014)

### Assignment 1 Solutions

Overall the first assignment was done quite well. The average mark for undergrads was approximately 70%. This lecture will focus on the details of the first assignment and going over the solutions. The solutions will not be posted online.

Question 1 was done quite well. Some marks were deducted for unusual scaling of eigenvalues (division by (n-1)). In general it is a good idea to scale the eigenvalues, however, it will usually not affect whether the PCA function works or not. Marks were given simply for having a picture for most of the questions. Note that the explanations for the questions should have some detail, one-word answers are not sufficient.

Question 2 part (a) had some issues: some students plotted too many eigenvectors. Also, in general, the fewer PCs we keep, the better.

Question 3 was very well done, as long as you had some pictures, you would have gotten most of the marks.

Question 4 had some issues. These are very well-known properties, so you should prove them from the definitions and show more detailed steps.

### Supervised Principal Component Analysis Introduction

Principal component analysis (PCA) may not be ideal for processing multiple sets of data at once or data with different classes of characteristics, since the principal components which maximize variation may not preserve these characteristics/classes. As such, maximum variance is sometimes not the best criterion for dimensionality reduction. We can incorporate side information, such as labels, into PCA in order to create a better dimensionality reduction. In these cases, we are more interested in finding subspaces which respect the labels of the original data than those that maximize the variance.

For example, a data set containing pictures of different faces can be divided into glasses/no glasses classes, or beards/no beards classes. KPCA may be able to distinguish some pre-specified features in the picture, while PCA could only give components that have max variance.

### Problem Statement

Given a set of data $(X,Y),\; X \in \mathbb{R}^p$ are p-dimensional explanatory variates and $Y\in \mathbb{R}^l$ l-dimensional response variables, the goal is to estimate a low-dimension subspace S such that S contains as much predictive information about the response variable Y as the original explanatory variable X.

Alternatively, given an i.i.d (independent and identically distributed) sample $\{(x_1,y_1), (x_2,y_2) \dots (x_N,y_N)\}$, we are looking for an orthogonal projection of X onto $S=U^T X$ such that Y depends mainly on $U^T X$.

### Related Work

There are many families of algorithms to solve this problem, some of which are briefly described below.

#### Classical Fisher's discriminant analysis (FDA)

This algorithm is well known; however, it will not be covered in this class. This[13] is a brief introduction to FDA.

#### Metric Learning (ML)

Metric learning computes the Mahalanobis distance instead of Euclidean distance over the input space. The Mahalanobis distance is defined as

$d_A(x_i,x_j) = \|x_i - x_j\|_A = \sqrt{(x_i - x_j)^T A (x_i-x_j)}$, where $A$ is a symmetric positive semi-definite matrix.

The matrix $A$ changes the definition of distance/similarity. The problem is then reduced to finding $A$.

The algorithm learns a linear transformation of the original inputs, followed by the Euclidean distance in the projected space. To see this, note that since $A$ is symmetric and positive semi-definite, we can write it as $A=UU^T$ (e.g., via a Cholesky or eigendecomposition). Then $d_A(x_i,x_j) = \sqrt{(x_i - x_j)^T A (x_i-x_j)} = \sqrt{(x_i - x_j)^T UU^T (x_i-x_j)} = \sqrt{(U^T x_i - U^T x_j)^T (U^T x_i- U^T x_j)}$, which is the Euclidean distance between $U^T x_i$ and $U^T x_j$.
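The equivalence above is easy to verify numerically. The sketch below (a random $U$ is assumed purely for illustration) checks that the Mahalanobis-style distance under $A = UU^T$ matches the Euclidean distance after projecting by $U^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((3, 2))   # hypothetical transformation
A = U @ U.T                       # symmetric positive semi-definite by construction

x_i = np.array([1.0, 2.0, 3.0])
x_j = np.array([0.5, -1.0, 2.0])

# Distance under the learned metric A.
d = x_i - x_j
dist_A = np.sqrt(d @ A @ d)

# Euclidean distance between the projected points U^T x_i and U^T x_j.
dist_euclid = np.linalg.norm(U.T @ x_i - U.T @ x_j)

assert np.isclose(dist_A, dist_euclid)
```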

#### Sufficient Dimension Reduction (SDR)

In sufficient dimension reduction, we attempt to find a linear subspace $S$ of the input space such that the conditional distribution of $Y$ given $X$ is preserved,

i.e., $P_{Y\mid X}(y\mid x) = P_{Y\mid U^TX}(y\mid U^Tx)\; \forall x,y$

#### Supervised Principal Components (BSPC)

• This version of Supervised Principal Components is different from the Supervised Principal Component Analysis that we are going to talk about in this lecture. It was proposed by Bair et al., and the original paper can be found here[14].
• This method is commonly used in biostatistics, but it is different from the one we are covering in this course.
##### Steps

1. Recall that in linear regression the coefficient for feature j is computed as:

$s_j = \dfrac{X_j^T Y}{\sqrt{X^T_jX_j}}$

Note that $X$ doesn't have to be centered. This is essentially a measure of similarity between feature j (row j of $X$) and the output Y.

2. Replace the data matrix with a reduced matrix consisting only of those features whose coefficients exceed some chosen threshold $\theta$ in absolute value.

3. Compute the first (few) principal components of the reduced data matrix.

4. Use the principal component(s) calculated in step 3 in a regression model or classification algorithm to predict the outcome.
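The four steps above can be sketched in NumPy as follows. The data, the 80th-percentile threshold, and the feature-in-rows layout are all assumptions made for illustration, not part of Bair et al.'s prescription:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 20
X = rng.standard_normal((p, n))           # features in rows, as in the notes
beta = np.zeros(p); beta[:3] = 2.0        # only the first 3 features matter
Y = beta @ X + 0.1 * rng.standard_normal(n)

# Step 1: univariate coefficient s_j = X_j^T Y / sqrt(X_j^T X_j) per feature.
s = (X @ Y) / np.sqrt(np.sum(X**2, axis=1))

# Step 2: keep features with |s_j| above a threshold theta (here: 80th percentile).
theta = np.quantile(np.abs(s), 0.8)
X_red = X[np.abs(s) > theta]

# Step 3: first principal component of the reduced (centered) data via SVD.
Xc = X_red - X_red.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                               # scores of the n samples on PC1

# Step 4: regress the outcome on the single principal-component score.
coef = (pc1 @ Y) / (pc1 @ pc1)
Y_hat = coef * pc1
```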

### Hilbert-Schmidt Independence Criterion (HSIC)

Two random variables $X\in \mathbb{R}^p$ and $Y\in \mathbb{R}^q$ are independent if and only if every pair of bounded continuous functions of them is uncorrelated.

If we map these random variables to reproducing kernel Hilbert spaces (RKHS), we can test for correlation.

To do so, we define an RKHS $F:X\to \mathbb{R}$ containing all continuous bounded real-valued functions of x,

and an RKHS $G:Y\to \mathbb{R}$ containing all continuous bounded real-valued functions of y.

Let $Z:= \{(x_1,y_1),\dots (x_n,y_n) \} \subseteq X\times Y$ be n independent observations from $\rho_{X,Y}$.

An estimator of HSIC is:

$\mathrm{HSIC}:= \dfrac{1}{(n-1)^2}\mathrm{Tr}(KHBH), \quad \text{where } H,K,B \in \mathbb{R}^{n\times n},$

$K_{ij} = K(x_i,x_j)$ is a kernel of X,

$B_{ij} = b(y_i,y_j)$ is a kernel of Y,

and $H = I - \frac{1}{n}ee^T$ is the centering matrix.

The HSIC is 0 if and only if X and Y are independent. Thus the estimator described above can be used to test for independence. For more details, see [15].
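The estimator $\mathrm{Tr}(KHBH)/(n-1)^2$ is straightforward to compute. A minimal sketch, assuming Gaussian (RBF) kernels for both $K$ and $B$ and a fixed bandwidth chosen only for illustration:

```python
import numpy as np

def rbf_kernel(Z, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel on the rows of Z.
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased HSIC estimator: Tr(K H B H) / (n-1)^2.
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    B = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix I - (1/n) e e^T
    return np.trace(K @ H @ B @ H) / (n - 1) ** 2

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 1))
y_dep = x + 0.1 * rng.standard_normal((200, 1))   # strongly dependent on x
y_ind = rng.standard_normal((200, 1))             # independent of x
```

On such data, `hsic(x, y_dep)` should be noticeably larger than `hsic(x, y_ind)`, which is close to 0, illustrating the independence test.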

If $X \sim P, Y \sim Q$ where P and Q are probability distributions, we can calculate the distance between distributions P and Q as follows:

$\| P-Q \|^2 = \left\| \frac{1}{n}\sum_{i} \phi(x_i)-\frac{1}{m}\sum_{i}\phi (y_i) \right\|^2$

$= \left(\frac{1}{n}\sum_{i} \phi(x_i)-\frac{1}{m}\sum_{i}\phi (y_i)\right)^T \left(\frac{1}{n}\sum_{j} \phi(x_j)-\frac{1}{m}\sum_{j}\phi (y_j)\right)$

$= \frac{1}{n^2} \sum_{i,j} K(x_i,x_j)+\frac{1}{m^2}\sum_{i,j} K(y_i,y_j) -\frac{2}{mn} \sum_{i,j} K(x_i,y_j)$

The idea behind this distance measure is counter-intuitive at first glance, since it states that after mapping into a high-dimensional space using $\phi(x)$, we can measure the distance between the distributions using the means of the sample data points. It is difficult to find an explicit $\phi(x)$, but we can use the kernel trick, since the expansion above depends only on inner products $K(\cdot,\cdot)$.
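The three-term kernel expansion can be implemented directly. A minimal sketch, assuming an RBF kernel and Gaussian samples chosen only for illustration:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Squared distance |P - Q|^2 between the samples X (n points) and
    # Y (m points), computed entirely through the kernel (kernel trick):
    # (1/n^2) sum K(x_i,x_j) + (1/m^2) sum K(y_i,y_j) - (2/mn) sum K(x_i,y_j).
    def k(A, B):
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T)
        return np.exp(-d2 / (2 * sigma**2))
    n, m = len(X), len(Y)
    return (k(X, X).sum() / n**2 + k(Y, Y).sum() / m**2
            - 2 * k(X, Y).sum() / (n * m))

rng = np.random.default_rng(3)
P = rng.standard_normal((300, 1))            # samples from N(0, 1)
Q_same = rng.standard_normal((300, 1))       # same distribution
Q_diff = 3 + rng.standard_normal((300, 1))   # shifted distribution N(3, 1)
```

Here `mmd2(P, Q_diff)` should come out much larger than `mmd2(P, Q_same)`, even though the explicit feature map $\phi$ is never constructed.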

Now that we have a way to measure the distance between two distributions, we have a way to test for independence since:

$X, Y\; \text{independent} \Rightarrow P_{X,Y}(x,y) = P_X(x)P_Y(y) \Rightarrow \| P_{X,Y}(x,y)-P_X(x)P_Y(y) \| = 0$