# Sparse PCA

## Introduction

In PCA, given $n$ observations on $d$ variables (in other words, $n$ $d$-dimensional data points), the goal is to find the directions in the data space along which the input data has the largest variance. In practice each of the $d$ variables has its own specific meaning, and it may be desirable to obtain principal components each of which is a combination of just a few of these variables. This makes the directions more interpretable and meaningful. However, this is not what plain PCA usually produces: each resulting direction is, in most cases, a linear combination of all the variables, with no zero coefficients.

To address this concern we add a sparsity constraint to the PCA problem, which makes the problem much harder to solve, since we have added a combinatorial constraint to the optimization problem. This paper shows how to find directions of maximum variance in the data space that have a limited number of non-zero elements. In other words, this lets us perform feature selection by selecting a subset of features in each direction.
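As a quick illustration of why sparsity matters, here is a minimal NumPy sketch (data, dimensions, and seed are invented for illustration): the leading principal component of a generic covariance matrix has all $d$ coefficients non-zero, so it mixes every variable.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 10))     # n = 100 observations on d = 10 variables
A = np.cov(Z, rowvar=False)            # d x d sample covariance matrix

# Ordinary PCA: the leading principal component is the top eigenvector of A.
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]

# In general every coefficient is non-zero, so the direction mixes
# all d variables and is hard to interpret.
n_nonzero = np.count_nonzero(np.abs(pc1) > 1e-12)
```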

## Contribution

In this paper, a direct approach (called DSPCA) that improves the sparsity of the principal components is presented. This is done in two stages: first, a sparsity criterion is incorporated into the PCA formulation; second, a convex relaxation of the problem is formed, which is a semidefinite program. For small problems, semidefinite programs can be solved via general-purpose interior-point methods. However, these methods cannot be used for high-dimensional problems. In that case, our particular problem can be expressed as a saddle-point problem; for this kind of problem, a smoothing argument combined with an optimal first-order smooth minimization algorithm offers a significant reduction in computational time and can therefore be used instead of generic interior-point SDP solvers.

## Notation

The following notation is used in this note.

- $S^n \,$ is the set of symmetric matrices of size $n \,$.
- $\textbf{1} \,$ is a column vector of ones.
- $\textbf{Card}(x) \,$ denotes the cardinality (number of non-zero elements) of a vector $x \,$.
- $\textbf{Card}(X) \,$ denotes the cardinality (number of non-zero elements) of a matrix $X \,$.
- For $X \in S^n \,$, $X \succeq 0 \,$ means $X \,$ is positive semi-definite.
- $|X| \,$ is the matrix whose elements are the absolute values of the elements of $X \,$.

## Problem Formulation

Given the covariance matrix $A$, the problem can be written as:

 $\begin{array}{ll} \textrm{maximize}& x^TAx\\ \textrm{subject\ to}& ||x||_2=1\\ &\textbf{Card}(x)\leq k \end{array}$ (1)

The cardinality constraint makes this problem NP-hard, so we look for an efficient convex relaxation.
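For tiny $d$, problem (1) can be solved exactly by enumerating all supports of size $k$ and taking the leading eigenvector of the corresponding principal submatrix; the exponential number of supports is precisely what makes the problem hard in general. A minimal NumPy sketch, with invented data and sizes:

```python
import itertools
import numpy as np

def sparse_pc_bruteforce(A, k):
    """Exact solution of (1) for tiny d: enumerate every size-k support."""
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for support in itertools.combinations(range(d), k):
        sub = A[np.ix_(support, support)]     # k x k principal submatrix
        w, V = np.linalg.eigh(sub)            # leading eigenpair on this support
        if w[-1] > best_val:
            best_val = w[-1]
            best_x = np.zeros(d)
            best_x[list(support)] = V[:, -1]  # unit vector supported on `support`
    return best_val, best_x

rng = np.random.default_rng(1)
B = rng.standard_normal((20, 5))
A = B.T @ B / 20                              # a 5 x 5 covariance matrix
val, x = sparse_pc_bruteforce(A, k=2)
```

The returned $x$ is feasible for (1) with $k=2$, and its variance $x^TAx$ is the exact optimum, at the cost of $\binom{d}{k}$ eigendecompositions.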

Defining $X=xx^T$, the above problem can be rewritten as

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &\textbf{Card}(X)\leq k^2\\ &X\succeq 0, \textbf{Rank}(X)=1\\ \end{array}$ (2)

The conditions $X\succeq 0$ and $\textbf{Rank}(X)=1$ in formulation (2) guarantee that $X$ can be written as $xx^T$ for some $x$. But this formulation must be relaxed before it can be solved efficiently, because the constraints $\textbf{Card}(X)\leq k^2$ and $\textbf{Rank}(X)=1$ are not convex. So we replace the cardinality constraint with a weaker one, $\textbf{1}^T|X|\textbf 1\leq k$ (since $\|x\|_2=1$ and $\textbf{Card}(x)\leq k$ give $\textbf{1}^T|x|\leq\sqrt k$ by Cauchy–Schwarz, hence $\textbf{1}^T|X|\textbf 1=(\textbf{1}^T|x|)^2\leq k$), and we drop the rank constraint. We get:

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &\textbf{1}^T|X|\textbf 1\leq k\\ &X\succeq 0\\ \end{array}$
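A quick numerical sanity check (sizes and seed invented) that the new constraint really is weaker: any $X=xx^T$ built from a unit vector with $\textbf{Card}(x)\leq k$ satisfies $\textbf{1}^T|X|\textbf 1=(\textbf{1}^T|x|)^2\leq k\,\|x\|_2^2=k$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 10, 3
support = rng.choice(d, size=k, replace=False)
x = np.zeros(d)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)        # unit norm, Card(x) = k

X = np.outer(x, x)            # rank one, Tr(X) = 1, Card(X) = k^2
lhs = np.abs(X).sum()         # 1^T |X| 1 = (sum_i |x_i|)^2
assert lhs <= k               # the relaxed constraint holds
```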

The above semidefinite relaxation can even be generalized to a non-square matrix $A \in {\mathbb R}^{m\times n}$ as follows:

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX^{12})\\ \textrm{subject\ to}&\textbf{Tr}(X^{ii})=1\\ &\textbf{1}^T|X^{ii}|\textbf 1\leq k_i, i=1,2\\ &\textbf{1}^T|X^{12}|\textbf 1\leq \sqrt{k_1k_2}\\ &X\succeq 0\\ \end{array}$

in the variable $X \in S^{m+n}$ with blocks $X^{ij}$ for $i,j=1,2$.

We then move the modified cardinality constraint into the objective as a penalty term with some positive factor $\rho$, obtaining a semidefinite form of the problem:

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)-\rho\textbf{1}^T|X|\textbf 1\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &X\succeq 0\\ \end{array}$ (3)

where, $\rho$ controls the penalty magnitude.

The objective function can be rewritten as $\textbf{Tr}(AX)-\rho\textbf{1}^T|X|\textbf 1=\min_{|U_{ij}|\leq\rho}\textbf{Tr}((A+U)X)$, since the inner minimum is attained at $U_{ij}=-\rho\,\textbf{sign}(X_{ij})$. So problem (3) is equivalent to:

 $\begin{array}{ll} \textrm{maximize}& \min_{|U_{ij}|\leq\rho}\textbf{Tr}(X(A+U))\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &X\succeq 0\\ \end{array}$ (4)

or equivalently, due to convexity:

 $\begin{array}{lll} \textrm{minimize}& \lambda^{\max}(A+U)\\ \textrm{subject\ to}&|U_{ij}|\leq\rho,&i,j=1\cdots,n\\ \end{array}$ (5)

where $\lambda^{\max}(M)$ is the largest eigenvalue of the matrix $M$.
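The identity used to pass from (3) to (4) is easy to verify numerically: for a fixed $X$, the inner minimum over $|U_{ij}|\leq\rho$ is attained at $U=-\rho\,\textbf{sign}(X)$, because $\textbf{Tr}(UX)$ is linear in each $U_{ij}$. A NumPy sketch with invented sizes, $\rho$, and seed:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 6, 0.1
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                     # symmetric "covariance"
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
X = np.outer(x, x)                                    # any feasible X works

penalized = np.trace(A @ X) - rho * np.abs(X).sum()   # objective of (3)

U_star = -rho * np.sign(X)                            # the inner minimizer
inner = np.trace((A + U_star) @ X)                    # equals `penalized`

U_rand = rho * rng.uniform(-1, 1, size=(n, n))        # any other feasible U
```

Any feasible `U_rand` gives an objective at least as large as `inner`, confirming that `U_star` attains the minimum.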

The problem in formulation (5) can be seen as computing a robust version of the maximum eigenvalue: it gives the least possible value of the maximum eigenvalue when each coefficient of $A$ can be perturbed by a bounded noise of intensity at most $\rho$.

The KKT conditions for optimization problems (3) and (5) are given by:

 $\left\{ \begin{array}{rl} &(A+U)X=\lambda^{\max}(A+U)X\\ &U\circ X=\rho |X| \\ &\text{Tr}(X)=1,\,\,\,X\succeq 0 \\ &|U_{i,j}|\leq \rho,\,\,\, i,j=1,\cdots ,n \end{array} \right.$

If $\lambda^{\max}$ in the first equation is simple (i.e., of multiplicity 1) and $\rho$ is sufficiently small, it follows from the first equation that $\textbf{Rank}(X)=1$. Indeed, the form of this equation implies that all columns of $X$ are eigenvectors of the matrix $A+U$ corresponding to its maximum eigenvalue. So the rank-one constraint is automatically satisfied in this special case.

## The Algorithm

### The Main Loop

The algorithm iteratively creates the semidefinite program (4) and solves it to obtain the next most important sparse principal component. At each iteration, given the optimal solution $X$ of the semidefinite program, we first recover a solution $x_1$ of the corresponding problem of form (1). That is straightforward if $X$ has rank 1; but since we have dropped the rank constraint this may not be the case, and we then take the dominant eigenvector of $X$, computed by standard methods from the literature, such as the power method, which efficiently provides the eigenvector of the largest eigenvalue of a matrix. Note, however, that in this case the resulting vector is not guaranteed to be as sparse as the matrix itself. After obtaining a (hopefully) sparse vector $x_1$, we replace the matrix $A$ with $A-(x_1^TAx_1)x_1x_1^T$ and repeat the above steps to obtain the next sparse component.
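The outer loop can be sketched as follows. This is only a sketch: `solve_sparse_direction` stands in for the DSPCA inner step (solve the SDP, then take the dominant eigenvector of its solution $X$); here it is replaced by a plain power-method eigenvector so the example runs end to end, and the extracted directions are therefore dense.

```python
import numpy as np

def leading_eigvec(M, iters=200):
    """Power method: dominant eigenvector of a symmetric PSD matrix."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

def deflation_loop(A, num_components, solve_sparse_direction):
    """Outer loop: extract a direction, deflate A, repeat."""
    A = A.copy()
    components = []
    for _ in range(num_components):
        x1 = solve_sparse_direction(A)
        components.append(x1)
        # Remove the captured variance: A <- A - (x1^T A x1) x1 x1^T
        A = A - (x1 @ A @ x1) * np.outer(x1, x1)
    return components

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 8))
A = B.T @ B / 50                      # an 8 x 8 covariance matrix
comps = deflation_loop(A, 3, leading_eigvec)
```

With an exact eigenvector step, each deflation zeroes out the variance along the extracted direction, so successive components come out (numerically) orthogonal.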

The question then is when to stop. Two approaches are proposed. First, at each iteration $i$, for all $j\lt i$, we include the constraint $x_j^TXx_j=0$ to make sure that each principal component we compute is orthogonal to the previous ones. Then the procedure stops automatically after $n$ steps (there will be no solution to the $(n+1)$-th problem).

The other way is to stop as soon as all elements of $A$ become smaller than $\rho$ in absolute value: at that point, the remaining entries of $A$ are below the noise level $\rho$.

### Solving the Semidefinite Problem

The cardinality constraint in the formulation (or its corresponding term in the penalized form) introduces a quadratic number of terms into the problem. This makes it practically impossible to use an interior-point method to solve the problem for large values of the input dimension, so we must use other methods, at some cost in convergence rate. Denoting the required accuracy by $\epsilon$, we can expect an interior-point-based program to converge after $\textstyle O(F(n)\log\frac1{\epsilon})$ iterations, for some function $F$ of the input size $n$. Here we will manage to solve the problem in $\textstyle O(\frac{F(n)}{\epsilon})$ iterations using a first-order scheme.

A first-order method can solve a problem in $\textstyle O(\frac{F(n)}{\epsilon})$ iterations if the problem satisfies a smoothness condition. But in our case the objective $\lambda^{\max}$ is not smooth, and as a result a direct application of a first-order method to our problem yields an algorithm that stops after $\textstyle O(\frac{F(n)}{\epsilon^2})$ iterations, which is too slow. To address this, we consider formulation (5) and define a smooth approximation of the function $\lambda^{\max}$.

To obtain a smooth approximation of the objective, we define $f_{\mu}(X)=\mu\log\textbf{Tr}(e^{\frac X\mu})$. One can verify that $\lambda^{\max}(X)\leq f_\mu(X)\leq\lambda^{\max}(X)+\mu\log n$, so for $\textstyle\mu=\frac{\epsilon}{\log n}$, $f_{\mu}$ is a smooth approximation of $\lambda^{\max}$ with an additive error of at most $\epsilon$. This way we obtain a scheme for solving the program in $\textstyle\frac d{\epsilon}\sqrt{\log d}$ iterations, each taking $O(d^3)$ time.
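The two bounds on $f_\mu$ follow from $e^{\lambda^{\max}/\mu}\leq\textbf{Tr}(e^{X/\mu})\leq n\,e^{\lambda^{\max}/\mu}$ and are easy to check numerically (sizes, $\mu$, and seed invented for illustration):

```python
import numpy as np

def f_mu(X, mu):
    """Smooth surrogate f_mu(X) = mu * log Tr(exp(X / mu))."""
    lam = np.linalg.eigvalsh(X)          # Tr(exp(X/mu)) = sum_i exp(lam_i/mu)
    m = lam.max()
    # log-sum-exp trick for numerical stability
    return m + mu * np.log(np.exp((lam - m) / mu).sum())

rng = np.random.default_rng(5)
n = 8
X = rng.standard_normal((n, n))
X = (X + X.T) / 2                        # a symmetric test matrix
lam_max = np.linalg.eigvalsh(X)[-1]

mu = 0.05
# lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu * log n
ok = lam_max <= f_mu(X, mu) <= lam_max + mu * np.log(n)
```

Shrinking $\mu$ tightens the approximation, at the cost of a worse Lipschitz constant for the gradient of $f_\mu$, which is the trade-off behind the choice $\mu=\epsilon/\log n$.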

## Experimental Results

Each point in the figures below corresponds to an experiment on 500 genes. The points are pre-clustered into 4 clusters based on some prior knowledge. The top three principal components are computed using each of the PCA and sparse PCA methods, and the points are plotted in the basis defined by these three components. For PCA, each principal component is a combination of all 500 variables (corresponding to 500 genes), while in sparse PCA each involves variables corresponding to at most 6 genes.

Figure 1. Distribution of gene expression data under PCA vs. sparse PCA. The point colors are based on a pre-computed independent clustering.

In the next figure, the left diagram compares the cumulative number of non-zero elements in the principal components for three methods: SPCA, the method explained here with $k=5$, and the method explained here with $k=6$. The right diagram shows the cumulative percentage of total variance explained by the first principal components resulting from PCA, SPCA (dashed), and the method explained here with $k=5$ and $k=6$ (solid lines).

Figure 2. Cumulative cardinality and cumulative percentage of total variance explained by the first principal components resulting from PCA, SPCA (dashed), and the method explained here with $k=5$ and $k=6$ (solid lines).