# stat946s13

## Sign up for paper presentation

Sign up for your presentation in the following table. Put your name and a link to the paper that you are going to present. Choose a date between July 16 and July 30 (inclusive).

| Date | Presentation (1) | Presentation (2) |
|------|------------------|------------------|
| July 16 | Ahmed Elgohary [1] Summary | name [link], Summary |
| July 23 | Jiaxi Liang [2] Summary | Lu Xin [link], Summary |
| July 25 | Fahmida Homayra [link] Summary | M.Hassan Z.Ashtiani [link], Summary |
| July 30 | Huan Cheng [link] Summary | |

## Set B

| Name | Second paper (the paper that you are going to write a critique of; this is different from the paper that you have chosen for presentation) |
|------|------------------|
| Ali Ghodsi (this is an example) | A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization [3] Summary |

# Background

## Introduction

Manifold learning is a significant problem across a wide variety of information processing fields including pattern recognition, data compression, machine learning, and database navigation. In many problems, the measured data vectors are high-dimensional but we may have reason to believe that the data lie near a lower-dimensional manifold. In other words, we may believe that high-dimensional data are multiple, indirect measurements of an underlying source, which typically cannot be directly measured. Learning a suitable low-dimensional manifold from high-dimensional data is essentially the same as learning this underlying source.

Dimensionality reduction <ref>In this thesis ‘manifold learning’ and ‘dimensionality reduction’ are used interchangeably. </ref> can also be seen as the process of deriving a set of degrees of freedom which can be used to reproduce most of the variability of a data set. Consider a set of images produced by the rotation of a face through different angles. Clearly only one degree of freedom is being altered, and thus the images lie along a continuous curve through image space.

Manifold learning techniques can be used in different ways including:

•  Data compression: produces a compact low-dimensional encoding of a given high-dimensional data set.
•  Data visualization: provides an interpretation of a given data set, usually as a by-product of data dimensionality reduction.
•  Preprocessing for supervised learning: unsupervised methods for data dimensionality reduction are used as a preprocessing step in order to simplify subsequent training of a supervised method such as classification.

Many algorithms for dimensionality reduction have been developed to accomplish these tasks. However, since the need for such analysis is raised in many areas of study, contributions to the field have come from many disciplines. While all of these methods have a similar goal, approaches to the problem are different.

Principal components analysis (PCA) is a classical method which provides a sequence of best linear approximations to a given high-dimensional observation. It is one of the most popular techniques for dimensionality reduction. However, its effectiveness is limited by its global linearity. Multidimensional scaling (MDS), which is closely related to PCA, suffers from the same drawback. Factor analysis and independent component analysis (ICA) also assume that the underlying manifold is a linear subspace. However, they differ from PCA in the way they model the subspace.

The subspace modeled by PCA captures the maximum variability in the data, and can be viewed as modeling the covariance structure of the data, whereas factor analysis models the correlation structure. ICA starts from a factor analysis solution and searches for rotations that lead to independent components.

In order to resolve the problem of dimensionality reduction in nonlinear cases, many techniques have been proposed, including kernel PCA, locally linear embedding (LLE), the Laplacian Eigenmaps Method (LEM), Isomap, and Semidefinite Embedding (SDE).

This chapter provides a brief overview of these different approaches and shows their close connection. Section 2 of this chapter explains Principal components analysis which is the core of many other techniques. In Section 3, kernel PCA, a recent extension to PCA, is discussed. Locally linear embedding, a new and very popular algorithm, is reviewed in Section 4. Section 5 explains Laplacian Eigenmaps. Multidimensional scaling and its recent extension, Isomap, are discussed in Sections 6 and 7 respectively. The last section discusses Semidefinite Embedding, a new approach to dimensionality reduction based on Semidefinite programming.

## Principal components analysis

Principal components analysis (PCA) is a very popular technique for dimensionality reduction. Given a set of data on $n$ dimensions, PCA aims to find a linear subspace of dimension lower than $n$ such that the data points lie mainly on this linear subspace. Such a reduced subspace attempts to maintain most of the variability of the data.

The linear subspace can be specified by $d$ orthogonal vectors which form a new coordinate system and are called the ‘principal components’. The principal components are orthogonal, linear transformations of the original data points, so there can be no more than $n$ of them. However, the hope is that only $d \lt n$ principal components are needed to approximate the space spanned by the $n$ original axes.

The most common definition of PCA, due to Hotelling , is that, for a given set of data vectors ${x_i}$, $i \in {1 ... t}$, the $d$ principal axes are those orthonormal axes onto which the variance retained under projection is maximal.

In order to capture as much of the variability as possible, let us choose the first principal component, denoted by $U_1$, to have maximum variance. Suppose that all centered observations are stacked into the columns of an $n \times t$ matrix $X$, where each column corresponds to an $n$-dimensional observation and there are $t$ observations. The first principal component is a linear combination of the original variables, defined by coefficients (or weights) $W=[w_1 ... w_n]^T$. In matrix form:

$U_1=W^T X$

$var(U_1)=var(W^T X)= W^T S W$

where $S$ is the $n \times n$ sample covariance matrix of $X$.

Clearly $var(U_1)$ can be made arbitrarily large by increasing the magnitude of $W$. Therefore, we choose $W$ to maximize $W^T S W$ while constraining $W$ to have unit length.

\begin{align} \max~W^T S W\\ subject~to~W^T W = 1\end{align}

To solve this optimization problem a Lagrange multiplier $\alpha_1$ is introduced:

\begin{align} L(W, \alpha)= W^T S W - \alpha_1 ( W^T W - 1) \label{1}\end{align}

Differentiating with respect to $W$ gives $n$ equations,

$SW= \alpha_1W$

Premultiplying both sides by $W^T$ we have:

$W^T S W = \alpha_1 W^T W = \alpha_1$

Note that $\alpha_1$ and $W$ are an eigenvalue and an eigenvector of $S$, so $var(U_1)$ is maximized when $\alpha_1$ is the largest eigenvalue of $S$.

Differentiating ([1]) with respect to the Lagrange multiplier $\alpha_1$ gives us back the constraint:

$W^T W =1$

This shows that the first principal component is given by the normalized eigenvector with the largest associated eigenvalue of the sample covariance matrix $S$. A similar argument can show that the $d$ dominant eigenvectors of covariance matrix $S$ determine the first $d$ principal components.
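This derivation can be checked numerically: the unit vector maximizing $W^T S W$ is the top eigenvector of the sample covariance matrix, and the variance retained under projection equals the top eigenvalue. A minimal numpy sketch on synthetic data (not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 5, 200
# t centered observations stacked as the columns of an n x t matrix X
X = rng.standard_normal((n, t)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])[:, None]
X = X - X.mean(axis=1, keepdims=True)

S = X @ X.T / t                        # n x n sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
w = eigvecs[:, -1]                     # unit eigenvector with largest eigenvalue

var_w = w @ S @ w                      # variance retained under projection onto w
```

Projecting onto any other unit vector retains less variance than `var_w`.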

Another nice property of PCA, closely related to the original discussion by Pearson, is that the projection onto the principal subspace minimizes the squared reconstruction error, $\sum_{i=1}^t ||x_i- \hat{x}_i||^2$. In other words, the principal components of a set of data in $\Re^n$ provide a sequence of best linear approximations to that data, for all ranks $d \le n$.

Consider the rank-$d$ linear approximation model:

$f(y) =\bar{x} + U_d y$

This is the parametric representation of a hyperplane of rank $d$.

For convenience, suppose $\bar{x} = 0$ (otherwise the observations can simply be replaced by their centered versions $\tilde{x}_i = x_i - \bar{x}$). Under this assumption the rank-$d$ linear model is $f(y) = U_d y$, where $U_d$ is an $n \times d$ matrix with $d$ orthogonal unit vectors as columns and $y$ is a $d$-dimensional vector of parameters. Fitting this model to the data by least squares amounts to minimizing the reconstruction error:

\begin{align} \min_{U_d,y_i} \;\sum_i^{t} || x_i -U_dy_i||^2\end{align}

By partial optimization for $y_i$ we obtain:

$\frac{\partial}{\partial y_i} = 0 \Rightarrow y_i=U_d ^T x_i$

Now we need to find the orthogonal matrix $U_d$:

\begin{align} \min_{U_d} \; \sum_i^{t} || x_i -U_dU_d^T x_i||^2\end{align}

Define $H_d= U_d U_d^T$. $H_d$ is an $n \times n$ matrix which acts as a projection matrix and projects each data point $x_i$ onto its rank-$d$ reconstruction.

In other words, $H_d x_i$ is the orthogonal projection of $x_i$ onto the subspace spanned by the columns of $U_d$. A unique solution $H^+$ can be obtained by using the pseudo-inverse of $X$, denoted $X^+$:

$H^+ = X X^+$

$X= U \Sigma V^T$

$X^+ = V \Sigma^+ U^T$

$H^+= U \Sigma V^T V \Sigma^+ U^T =UU^T$

For each rank $d$, $U_d$ consists of the first $d$ columns of $U$.

*[Algorithm 1: PCA]*

The solution for $U$ can thus be expressed through the singular value decomposition (SVD) of $X$, $X=U \Sigma V^T$, since the columns of $U$ in the SVD contain the eigenvectors of $XX^T$. The PCA procedure is summarized in Algorithm 1.
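Since the algorithm table itself is not reproduced here, the following is a plausible numpy sketch of the procedure described above (center the data, take the SVD, project onto the top $d$ left singular vectors); the function name and the synthetic data are illustrative only:

```python
import numpy as np

def pca(X, d):
    """PCA via SVD: X is an n x t matrix with observations in its columns.
    Returns the principal directions U_d (n x d) and the d x t codes Y."""
    Xc = X - X.mean(axis=1, keepdims=True)      # center the observations
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    U_d = U[:, :d]                              # top-d left singular vectors
    Y = U_d.T @ Xc                              # y_i = U_d^T x_i
    return U_d, Y

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 50))
U_d, Y = pca(X, 3)
```

Replacing the SVD with an eigendecomposition of $XX^T$ would give the same $U_d$, as noted in the text.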

## Kernel PCA

Through the use of kernels, principal components can be computed efficiently in high-dimensional feature spaces that are related to the input space by some nonlinear mapping. Recall that PCA is an orthogonal transformation of the coordinate system in which we describe our data.

Kernel PCA finds principal components which are nonlinearly related to the input space. PCA can be formulated entirely in terms of dot products between data points. In kernel PCA, this dot product is replaced by the inner product of a Hilbert space. This is equivalent to performing PCA in the space produced by the nonlinear mapping, where the low-dimensional latent structure is, hopefully, easier to discover.

Consider a feature space $\mathcal{H}$ such that:

$\Phi : x \rightarrow \mathcal{H} , x \mapsto \Phi(x)$

Suppose $\sum_i^t \Phi(x_i) = 0$ (we will return to this point and show how this condition can be satisfied in Hilbert space).

This allows us to formulate the kernel PCA objective as follows:

$\min~ \sum_i^t || \Phi(x_i)- U_dU_d^T \Phi(x_i) ||^2$

By the same argument used for PCA, the solution can be found by SVD: $\Phi(X) = U \Sigma V^T$

where $U$ contains the eigenvectors of $\Phi(X) \Phi(X)^T$.

However, the singular value decomposition allows us to do much more than simply rederive the principal components algorithm. In fact, given the matrices $\Sigma$ and $V$, one can derive a dual form of principal components analysis which limits the direct dependence on the original dimensionality $n$, via the kernel trick.

Assume that the dimensionality $n$ of the $n\times t$ matrix of data $X$ is large (i.e. $n \gg t$). In this case, Algorithm 1 is impractical. We would prefer a run time that depends only on the number of training examples $t$, or that at least has a reduced dependence on $n$.

To reduce the dependence on $n$, first assume that we have a kernel $k(\cdot,\cdot)$ that allows us to compute $k(x,y)=x^\top y$. Given such a function, we can then compute the matrix $X^\top X = K$, with $K_{ij}=k(x_i,x_j)$. Let $[X^\top X]$ denote the fact that the matrix $X^\top X$ can be computed efficiently using the kernel trick.

The eigenvectors in $U$ corresponding to nonzero singular values in $\Sigma$ (square roots of eigenvalues) are in a one-to-one correspondence with the eigenvectors in $V$.

Now assume that we perform dimensionality reduction on $U$ and keep only the first $d$ eigenvectors, corresponding to the top $d$ nonzero singular values in $\Sigma$. These eigenvectors will still be in a one-to-one correspondence with the first $d$ eigenvectors in $V$:

$X\;V = U\;\Sigma$

where the dimensions of these matrices are:

$\begin{array}{cccc} X & U & \Sigma & V \\ n\times t & n\times d & d\times d & t\times d \\ & & \mbox{diagonal} & \end{array}$

Crucially, $\Sigma$ is now square and invertible, because its diagonal has nonzero entries. Thus, the following conversion between the top $d$ eigenvectors can be derived:

\begin{align} U = X\;V\;\Sigma^{-1}\end{align}

Replacing all uses of $U$ in Algorithm 1 with $XV\Sigma^{-1}$ gives us the dual form of PCA, Algorithm 2 (see Figure [alg 2]).

[h] [alg 2]
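Algorithm 2 is likewise not reproduced here; the sketch below instead checks the key identity $U = X V \Sigma^{-1}$ numerically, recovering the top left singular vectors of a tall data matrix from the eigenvectors of the $t \times t$ Gram matrix $X^\top X$ (synthetic data, numpy only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, d = 100, 15, 4                     # high-dimensional case, n >> t
X = rng.standard_normal((n, t))
X = X - X.mean(axis=1, keepdims=True)    # center the observations

# Primal: left singular vectors of X (columns of U)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Dual: eigen-decompose the t x t Gram matrix K = X^T X instead of
# the much larger n x n covariance matrix
K = X.T @ X
evals, V = np.linalg.eigh(K)
order = np.argsort(evals)[::-1][:d]      # indices of the top-d eigenvalues
V_d = V[:, order]
Sigma = np.diag(np.sqrt(evals[order]))   # singular values = sqrt(eigenvalues)

U_dual = X @ V_d @ np.linalg.inv(Sigma)  # U = X V Sigma^{-1}
```

The columns of `U_dual` agree with the top columns of `U` up to sign, which is the usual eigenvector ambiguity.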

In the derivation of the kernel PCA we assumed that $\Phi(x)$ has zero mean. The following normalization of the kernel satisfies this condition.

$\tilde{k}(x,y)= k(x,y)-E_x[k(x,y)]-E_y[k(x,y)]+E_x[E_y[k(x,y)]]$

To prove this, define:

$\tilde{\Phi}(x) = \Phi(x) - E_x[\Phi(x)]$

The corresponding kernel is then:

$\tilde{k}(x,y)=\tilde{\Phi}(x) \cdot \tilde{\Phi}(y)$

This expands as follows:

\begin{align} \tilde{k}(x,y) &= (\Phi(x) - E_x[\Phi(x)]) \cdot (\Phi(y) - E_y[\Phi(y)]) \\ &= k(x,y)-E_x[k(x,y)]-E_y[k(x,y)]+E_x[E_y[k(x,y)]]\end{align}
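In matrix form this normalization is a double centering: with $H = I - \frac{1}{t}ee^T$, the centered kernel matrix is $\tilde{K} = HKH$. A small numpy check of this identity with a linear kernel on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
t = 20
X = rng.standard_normal((6, t))
K = X.T @ X                          # linear kernel matrix, K_ij = x_i . x_j

# Double centering via H = I - (1/t) e e^T
H = np.eye(t) - np.ones((t, t)) / t
K_tilde = H @ K @ H

# The same thing written element-wise, following the expansion in the text:
# subtract the row means and column means, add back the grand mean
K_check = (K - K.mean(axis=0, keepdims=True)
             - K.mean(axis=1, keepdims=True)
             + K.mean())
```

Every row and column of the centered kernel sums to zero, which is exactly the condition $\sum_i \Phi(x_i) = 0$ expressed in kernel form.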

## Locally Linear Embedding

Locally linear embedding (LLE) computes a low-dimensional, neighborhood-preserving embedding of high-dimensional data. A data set of dimensionality $n$, which is assumed to lie on or near a smooth nonlinear manifold of dimensionality $d \lt n$, is mapped into a single global coordinate system of lower dimensionality $d$. The global nonlinear structure is recovered by locally linear fits.

Consider $t$ $n$-dimensional real-valued vectors $x_i$ sampled from some underlying manifold. We can assume each data point and its neighbors lie on, or are close to, a locally linear patch of the manifold. By a linear mapping, consisting of a translation, rotation, and rescaling, the high-dimensional coordinates of each neighborhood can be mapped to global internal coordinates on the manifold. Thus, the nonlinear structure of the data can be identified through two linear steps: first, compute the locally linear patches, and second, compute the linear mapping to the coordinate system on the manifold.

The main goal here is to map the high-dimensional data points to the single global coordinate system of the manifold such that the relationships between neighboring points are preserved. This proceeds in three steps:

1. Identify the neighbors of each data point $x_i$. This can be done by finding the $k$ nearest neighbors, or by choosing all points within some fixed radius, $\epsilon$.
2. Compute the weights that best linearly reconstruct $x_i$ from its neighbors.
3. Find the low-dimensional embedding vector $y_i$ which is best reconstructed by the weights determined in the previous step.

After finding the nearest neighbors in the first step, the second step must compute a local geometry for each locally linear patch. This geometry is characterized by linear coefficients that reconstruct each data point from its neighbors.

$\min_{W}\sum_{i=1}^{t}||\mathbf{x}_{i}-\sum_{j=1}^{k}W_{ij}\mathbf{x}_{N_{i}(j)}||^{2}$

where $N_{i}(j)$ is the index of the $j$th neighbor of the $i$th point, and the weights for each point are constrained to sum to one, $\sum_{j=1}^{k}W_{ij}=1$. The third step then selects code vectors so as to preserve the reconstruction weights by solving

$\min_{Y}\sum_{i=1}^{t}||\mathbf{y}_{i}-\sum_{j=1}^{k}W_{ij}\mathbf{y}_{N_{i}(j)}||^{2}$ This objective can be reformulated as

\begin{align} \min_{Y}\textrm{Tr}(Y^{T}YL) \label{LLECOST}\end{align}

where $L=(I-W)^{T}(I-W)$.

The solution for $Y$ can have an arbitrary origin and orientation. In order to make the problem well-posed, these two degrees of freedom must be removed. Requiring the coordinates to be centered on the origin ($\sum_i y_i =0$), and constraining the embedding vectors to have unit covariance ($Y^T Y=I$), removes the first and second degrees of freedom respectively.

Optimizing the cost function under the second of these constraints shows that the cost is minimized when the columns of $Y^T$ (rows of $Y$) are the eigenvectors associated with the lowest eigenvalues of $L$.

Discarding the eigenvector associated with eigenvalue $0$, which is the constant vector, then satisfies the centering constraint.
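The three steps can be sketched compactly in numpy. This is a bare-bones illustration (the regularization of the local Gram matrix is an assumed stabilizer, not from the text), not a production implementation:

```python
import numpy as np

def lle(X, k, d):
    """Bare-bones LLE of an n x t data matrix X (observations in columns):
    k-NN neighborhoods, least-squares reconstruction weights summing to one,
    then the bottom non-trivial eigenvectors of L = (I - W)^T (I - W)."""
    t = X.shape[1]
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # pairwise distances
    W = np.zeros((t, t))
    for i in range(t):
        nbrs = np.argsort(D[i])[1:k + 1]        # k nearest neighbors, skipping self
        Z = X[:, nbrs] - X[:, [i]]              # neighbors relative to x_i
        C = Z.T @ Z                             # local Gram matrix
        C += 1e-3 * np.trace(C) * np.eye(k)     # assumed regularization for stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                # reconstruction weights sum to one
    I = np.eye(t)
    L = (I - W).T @ (I - W)
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1:d + 1].T                  # discard the eigenvalue-0 eigenvector

# Noisy 1-D curve embedded in 3-D
rng = np.random.default_rng(4)
theta = np.sort(rng.uniform(0, 3, 60))
X = np.vstack([np.cos(theta), np.sin(theta), theta]) + 0.01 * rng.standard_normal((3, 60))
Y = lle(X, k=6, d=1)
```

Because the weights in each row of $W$ sum to one, the constant vector is always an eigenvector of $L$ with eigenvalue $0$, so the returned coordinates are automatically centered.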

## Laplacian Eigenmaps

Given $t$ points in $n$-dimensional space, the Laplacian Eigenmaps Method (LEM) starts by constructing a weighted graph with $t$ nodes and a set of edges connecting neighboring points. As in LLE, the neighborhood graph can be constructed by finding the $k$ nearest neighbors, or by choosing all points within some fixed radius $\epsilon$. For weighting the edges there are two variations: either each edge is weighted by $W_{ij}=e ^{-\frac{||x_i-x_j||^2}{s}}$, where $s$ is a free parameter which must be chosen a priori, or all $W_{ij}$ for connected vertices $i$ and $j$ are simply set to $1$. The embedding map is then provided by the following objective

$\min_{Y}\sum_{i=1}^{t}\sum_{j=1}^{t}(\mathbf{y}_{i}-\mathbf{y}_{j})^{2}W_{ij}$

subject to appropriate constraints. This objective can be reformulated as

$\min_{Y} \textrm{Tr} (Y L Y^{T})$

where $L=R-W$, $R$ is diagonal, and $R_{ii}=\sum_{j=1}^{t}W_{ij}$. This $L$ is called the graph Laplacian. Similar to [LLECOST], after adding orthogonality and centering constraints, a solution can be found by taking the rows of $Y$ to be the eigenvectors of $L$ (the non-normalized solution). Alternatively, the problem can be constrained by $Y R Y^T = I$; in this case the solution is provided by the eigenvectors of the generalized eigenvalue problem $Ly =\lambda R y$ (the normalized solution). Note that the final objectives for both LEM and LLE have the same form and differ only in how the matrix $L$ is constructed; therefore, the same closed-form solution (taking the rows of $Y$ to be the eigenvectors of $L$) works.
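A sketch of the non-normalized solution follows; for simplicity it uses a fully connected heat-kernel graph rather than a $k$-nearest-neighbor or $\epsilon$-ball graph, so the weighting is the only place the neighborhood structure enters:

```python
import numpy as np

def laplacian_eigenmaps(X, d, s=1.0):
    """Non-normalized Laplacian eigenmap of an n x t data matrix X.
    For simplicity the graph is fully connected with heat-kernel
    weights; s is the free kernel parameter."""
    t = X.shape[1]
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # squared distances
    W = np.exp(-D2 / s)
    np.fill_diagonal(W, 0.0)
    R = np.diag(W.sum(axis=1))     # R_ii = sum_j W_ij
    L = R - W                      # graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1:d + 1].T     # skip the constant eigenvector (eigenvalue 0)

rng = np.random.default_rng(5)
X = rng.standard_normal((4, 30))
Y = laplacian_eigenmaps(X, d=2)
```

As with LLE, the constant vector is an exact eigenvector of $L$ with eigenvalue $0$, so the remaining eigenvectors are centered and mutually orthonormal.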

## Multidimensional scaling (MDS)

Multidimensional scaling (MDS) is another classical approach that maps the original high-dimensional space to a lower-dimensional space while preserving pairwise distances. MDS addresses the problem of constructing a configuration of $t$ points in Euclidean space from information about the distances between the $t$ patterns.

A $t \times t$ matrix $D$ is called a distance or affinity matrix if it is symmetric, with $d_{ii} = 0$ and $d_{ij} \gt 0$ for $i \neq j$.

Given a distance matrix $D$, MDS attempts to find $t$ data points $y_1, ..., y_t$ in $d$ dimensions, such that if $\hat{d}_{ij}$ denotes the Euclidean distance between $y_i$ and $y_j$, then $\hat{D}$ is similar to $D$. In particular, we consider metric MDS , which minimizes

\begin{align} \min_Y \sum_{i=1}^t \sum_{j=1}^t (d_{ij}^{(X)}-d_{ij}^{(Y)})^2 \label{20}\end{align}

where $d_{ij}^{(X)} = ||x_i-x_j||$ and $d_{ij}^{(Y)} = ||y_i-y_j||$. The matrix $D^{(X)}$ of squared pairwise distances can be converted to inner products $X^TX$ by double centering,

$X^T X = -\frac{1}{2} H D^{(X)} H$

where $H=I-\frac{1}{t}ee^T$ and $e$ is a column vector of all ones. Now ([20]) can be reduced to

\begin{align} \min_Y \sum_{i=1}^t \sum_{j=1}^t (x_i^T x_j-y_i^Ty_j)^2 \label{21}\end{align}

It can be shown that the solution is $Y=\Lambda^{1/2} V^T$, where the columns of $V$ are the eigenvectors of $X^TX$ corresponding to the top $d$ eigenvalues, and $\Lambda$ is the diagonal matrix of those top $d$ eigenvalues. Clearly the solution of MDS is identical to that of dual PCA (see Figure [alg 2]), and as far as Euclidean distances are concerned, MDS and PCA produce the same results. However, the distances need not be Euclidean and can represent many types of dissimilarities between objects.
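A numpy sketch of classical MDS; note that the double-centering step is applied to the matrix of squared distances. On a genuinely Euclidean distance matrix, and with $d$ matching the true dimension, the embedding reproduces the distances exactly:

```python
import numpy as np

def classical_mds(D, d):
    """Classical (metric) MDS from a t x t matrix D of pairwise Euclidean
    distances: double-center the squared distances to recover inner
    products, then keep the top-d eigenpairs."""
    t = D.shape[0]
    H = np.eye(t) - np.ones((t, t)) / t
    B = -0.5 * H @ (D ** 2) @ H                 # B = X^T X for centered X
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:d]           # top-d eigenpairs
    return (evecs[:, idx] * np.sqrt(evals[idx])).T   # Y = Lambda^{1/2} V^T

# Recover a configuration from the distances of known 2-D points
rng = np.random.default_rng(6)
X = rng.standard_normal((2, 12))
D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
Y = classical_mds(D, d=2)
D_hat = np.linalg.norm(Y[:, :, None] - Y[:, None, :], axis=0)  # embedding distances
```

The recovered configuration `Y` differs from `X` only by translation, rotation, and reflection, which distances cannot determine.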

## Isomap

Another recent approach to nonlinear dimensionality reduction is the Isomap algorithm, a nonlinear generalization of classical MDS. Its main contribution is to compute MDS not in the input space but in the geodesic space of the manifold. The geodesic distances represent the shortest paths along the curved surface of the manifold, measured as if the surface were flat; they can be approximated by a sequence of short steps between neighboring sample points. Isomap then applies MDS to the geodesic distances to find a low-dimensional mapping with similar pairwise distances.

Like LLE, the Isomap algorithm proceeds through three steps:

1. Find the neighbors of each data point in high-dimensional data space.
2. Compute the geodesic pairwise distances between all points.
3. Embed the data via MDS so as to preserve these distances.

Again like LLE, the first step can be performed by identifying the $k$ nearest neighbors, or by choosing all points within some fixed radius, $\epsilon$. These neighborhood relations are represented by a graph $G$ in which each data point is connected to its nearest neighbors, with edges of weight $d_X(i,j)$ between neighbors.

The geodesic distances $d_M(i,j)$ between all pairs of points on the manifold $M$ are then estimated in the second step. Isomap approximates $d_M(i,j)$ as the shortest-path distance $d_G(i,j)$ in the graph $G$. This can be done in different ways, including Dijkstra’s algorithm and Floyd’s algorithm.

Either algorithm produces a matrix of graph distances $D^{(\mathcal{G})}$ containing the shortest-path distances between all pairs of points in $G$.

In its final step, Isomap applies classical MDS to $D^{(\mathcal{G})}$ to generate an embedding of the data in a $d$-dimensional Euclidean space $Y$.

The global minimum of the cost function is obtained by setting the coordinates $y_i$ to the top $d$ eigenvectors of the inner-product matrix $B$ obtained from $D^{(\mathcal{G})}$.
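The three steps can be sketched in numpy, using Floyd-Warshall for the shortest paths and classical MDS for the final embedding (the arc data and parameter choices are illustrative):

```python
import numpy as np

def isomap(X, k, d):
    """Isomap sketch: k-NN graph, Floyd-Warshall shortest paths as geodesic
    distance estimates, then classical MDS on the geodesic distances."""
    t = X.shape[1]
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    G = np.full((t, t), np.inf)                 # graph distances, inf = no edge
    for i in range(t):
        nbrs = np.argsort(D[i])[1:k + 1]        # k nearest neighbors
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]                 # symmetrize the graph
    np.fill_diagonal(G, 0.0)
    for m in range(t):                          # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, [m]] + G[[m], :])
    H = np.eye(t) - np.ones((t, t)) / t         # classical MDS on G
    B = -0.5 * H @ (G ** 2) @ H
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:d]
    return (evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))).T

# Points on a circular arc: the geodesic (arc-length) structure is 1-D
theta = np.linspace(0, np.pi, 40)
X = np.vstack([np.cos(theta), np.sin(theta)])
Y = isomap(X, k=4, d=1)
```

On the arc, the recovered 1-D coordinate tracks the arc-length parameter almost perfectly, which plain MDS on the ambient (chordal) distances would not achieve as cleanly.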

## Semidefinite Embedding (SDE)


In 2004, Weinberger and Saul introduced Semidefinite Embedding (SDE), which learns a kernel matrix instead of choosing a kernel function a priori. They formulated the problem of learning the kernel matrix as an instance of semidefinite programming. Since the kernel matrix $K$ represents inner products of vectors in a Hilbert space, it must be positive semidefinite. The kernel should also be centered, $\sum_{ij} K_{ij} = 0$. Lastly, SDE imposes constraints on the kernel matrix to ensure that the distances and angles between points and their neighbors are preserved under the neighborhood graph $\eta$: if both $x_i$ and $x_j$ are neighbors ($\eta_{ij}=1$) or are common neighbors of another input ($[\eta^T \eta]_{ij} \gt 0$), then

$||\Phi(x_i)-\Phi(x_j)||^2 = ||x_i - x_j||^2.$

In terms of the kernel matrix, this can be written as

$K_{ii}-2K_{ij}+K_{jj} = ||x_i - x_j||^2.$

By adding an objective that maximizes $\textrm{Tr}(K)$, which represents the variance of the data points in the learned feature space, SDE becomes a semidefinite program for the kernel matrix $K$. The last detail of SDE is the construction of the neighborhood graph $\eta$, formed by connecting each point to its $k$ nearest neighbors under the distance $||x_i - x_j||$. The algorithm is summarized in Table [tab:sde].

*[Table [tab:sde]: the SDE algorithm]*
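Solving the semidefinite program itself requires an SDP solver, which is not sketched here. What can be shown compactly is the construction of the neighborhood graph $\eta$ and the resulting list of distance-preservation constraints; the helper below is hypothetical (not from the paper) and uses only numpy:

```python
import numpy as np

def sde_constraints(X, k):
    """Hypothetical helper: build the k-NN graph eta and list the pairs
    (i, j) whose distances SDE must preserve, i.e. neighbors (eta_ij = 1)
    or common neighbors of some input ([eta^T eta]_ij > 0).
    Returns tuples (i, j, ||x_i - x_j||^2)."""
    t = X.shape[1]
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    eta = np.zeros((t, t))
    for i in range(t):
        eta[i, np.argsort(D[i])[1:k + 1]] = 1   # k nearest neighbors of point i
    common = eta.T @ eta                        # [eta^T eta]_ij = common-neighbor count
    pairs = [(i, j) for i in range(t) for j in range(i + 1, t)
             if eta[i, j] or eta[j, i] or common[i, j] > 0]
    return [(i, j, D[i, j] ** 2) for (i, j) in pairs]

rng = np.random.default_rng(7)
X = rng.standard_normal((3, 15))
cons = sde_constraints(X, k=3)
```

Each tuple corresponds to one linear equality constraint $K_{ii}-2K_{ij}+K_{jj} = ||x_i - x_j||^2$ in the semidefinite program; the trivial kernel $K = X^T X$ satisfies all of them, which is a useful sanity check.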

<references />