# Difference between revisions of "sparse PCA"

## Introduction

Given $n$ observations of $d$ variables (in other words, $n$ $d$-dimensional data points), the goal of PCA is to find the directions in the data space along which the input data has the largest variance. In practice each of the $d$ variables has its own specific meaning, and it may be desirable to obtain principal components each of which is a combination of only a few of these variables. Such directions are more interpretable and meaningful, and restricting each direction to a subset of the variables amounts to performing feature selection. This, however, is not what the original PCA method usually produces: in most cases each resulting direction is a linear combination of all the variables, with no zero coefficients.

To address this concern we add a sparsity constraint to the PCA problem, which makes the problem much harder to solve, because we have added a combinatorial constraint to our optimization problem. This paper is about finding directions of maximum variance in the data space that have a limited number of non-zero elements.

## Problem Formulation

Given the covariance matrix $A$, the problem can be written as:

 $\begin{array}{ll} \textrm{maximize}& x^TAx\\ \textrm{subject\ to}& ||x||_2=1\\ &\textbf{Card}(x)\leq k \end{array}$ (1)
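Before relaxing the problem, it is instructive to see what solving (1) exactly entails. A minimal sketch (not from the paper; assumes NumPy): enumerate all supports of size $k$ and take the leading eigenvector of the corresponding principal submatrix. This is exponential in $d$, which motivates the relaxation below.

```python
from itertools import combinations
import numpy as np

def sparse_pca_bruteforce(A, k):
    """Exhaustively solve problem (1): max x^T A x s.t. ||x||_2 = 1, Card(x) <= k."""
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for support in combinations(range(d), k):
        idx = list(support)
        # leading eigenpair of the k x k principal submatrix of A
        w, V = np.linalg.eigh(A[np.ix_(idx, idx)])
        if w[-1] > best_val:
            best_val = w[-1]
            best_x = np.zeros(d)
            best_x[idx] = V[:, -1]      # unit-norm leading eigenvector
    return best_val, best_x

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 6))
A = B.T @ B / 20                        # toy 6 x 6 sample covariance
val, x = sparse_pca_bruteforce(A, k=2)
```

Restricting the search to supports of size exactly $k$ loses nothing, since enlarging a principal submatrix can only increase its largest eigenvalue (eigenvalue interlacing).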

Defining $X=xx^T$, the above formula can be rewritten as

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &\textbf{Card}(X)\leq k^2\\ &X\succeq 0, \textbf{Rank}(X)=1\\ \end{array}$ (2)

The conditions $X\succeq 0$ and $\textbf{Rank}(X)=1$ in formula (2) guarantee that $X$ can be written as $xx^T$ for some $x$, and $\textbf{Card}(x)\leq k$ translates to $\textbf{Card}(X)\leq k^2$. But this formulation must be relaxed before it can be solved efficiently, because the constraints $\textbf{Card}(X)\leq k^2$ and $\textbf{Rank}(X)=1$ are not convex. So we replace the cardinality constraint with a weaker one: $\textbf{1}^T|X|\textbf 1\leq k$. This holds for any $X=xx^T$ feasible for (2), since by the Cauchy–Schwarz inequality $\textbf{1}^T|X|\textbf 1=\|x\|_1^2\leq k\|x\|_2^2=k$ when $x$ has at most $k$ non-zero entries. We also drop the rank constraint. So we get:

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &\textbf{1}^T|X|\textbf 1\leq k\\ &X\succeq 0\\ \end{array}$
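As a quick sanity check (a NumPy sketch, not part of the paper): any matrix $X=xx^T$ built from a $k$-sparse unit vector $x$ satisfies all three relaxed constraints, so the relaxation can only enlarge the feasible set.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 10, 3
x = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)                  # ||x||_2 = 1 with Card(x) <= k

X = np.outer(x, x)                      # X = x x^T
trace_X = np.trace(X)                   # should equal 1
l1_bound = np.abs(X).sum()              # this is 1^T |X| 1 = ||x||_1^2 <= k
```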

We then replace the modified cardinality constraint with a penalty term in the objective function, with some positive factor $\rho$. So we get a semidefinite form of the problem:

 $\begin{array}{ll} \textrm{maximize}& \textbf{Tr}(AX)-\rho\textbf{1}^T|X|\textbf 1\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &X\succeq 0\\ \end{array}$ (3)

The objective function can be rewritten as $\textbf{Tr}(AX)-\rho\textbf{1}^T|X|\textbf 1=\min_{|U_{ij}|\leq\rho}\textbf{Tr}((A+U)X)$, since the inner minimum is attained at $U_{ij}=-\rho\,\textbf{sign}(X_{ij})$. So problem (3) is equivalent to:

 $\begin{array}{ll} \textrm{maximize}& \min_{|U_{ij}|\leq\rho}\textbf{Tr}((A+U)X)\\ \textrm{subject\ to}&\textbf{Tr}(X)=1\\ &X\succeq 0\\ \end{array}$ (4)

or equivalently, due to convexity:

 $\begin{array}{lll} \textrm{minimize}& \lambda^{\max}(A+U)\\ \textrm{subject\ to}&|U_{ij}|\leq\rho,&i,j=1,\cdots,n\\ \end{array}$ (5)

The problem in formulation (5) can be seen as computing a robust version of the maximum eigenvalue: it is the smallest possible value of the maximum eigenvalue, given that each element of $A$ can be changed by at most the noise value $\rho$.
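The equivalence of (3) and (4) rests on the elementwise identity used above; it is easy to confirm numerically (a sketch with NumPy, using a random symmetric $A$ and a random feasible $X$):

```python
import numpy as np

rng = np.random.default_rng(2)
d, rho = 5, 0.3
A = rng.standard_normal((d, d)); A = (A + A.T) / 2   # symmetric test matrix
M = rng.standard_normal((d, d)); X = M @ M.T          # any X >= 0 works here
X /= np.trace(X)                                      # normalize so Tr(X) = 1

U_star = -rho * np.sign(X)              # elementwise minimizer of Tr(UX)
lhs = np.trace((A + U_star) @ X)
rhs = np.trace(A @ X) - rho * np.abs(X).sum()
```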

## The Algorithm

### The Main Loop

The algorithm iteratively creates the semidefinite program (4) and solves it to obtain the next most important sparse principal component. At each iteration, if $X$ is the optimal solution of the optimization problem, we first need to obtain the solution $x$ of the corresponding problem of form (1). That is straightforward if $X$ has rank 1, but since we have dropped the rank constraint this may not be the case, and then we take $x$ to be the dominant eigenvector of $X$, which can be computed by standard methods. After obtaining a (hopefully) sparse vector $x_1$, we replace the matrix $A$ with $A-(x_1^TAx_1)x_1x_1^T$ and repeat the above steps to obtain the next sparse component.
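The loop above can be sketched as follows. Here `solve_sdp` is a hypothetical placeholder for any routine that returns the optimal $X$ of the relaxed program for the current matrix; everything else follows the description in the text.

```python
import numpy as np

def dominant_eigenvector(X):
    # eigenvector of the largest eigenvalue (eigh returns ascending order)
    return np.linalg.eigh(X)[1][:, -1]

def sparse_components(A, solve_sdp, rho, num_components):
    """Deflation loop: solve the relaxed SDP, extract x, deflate A, repeat.

    `solve_sdp(A, rho)` stands in for a solver of program (3)/(4).
    """
    A = A.copy()
    components = []
    for _ in range(num_components):
        X = solve_sdp(A, rho)
        x = dominant_eigenvector(X)           # recover x when Rank(X) > 1
        components.append(x)
        A = A - (x @ A @ x) * np.outer(x, x)  # remove the captured variance
    return components
```

With $\rho=0$ the optimal $X$ is just $vv^T$ for the leading eigenvector $v$ of $A$, so the loop reduces to ordinary PCA deflation.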

The question then is when to stop. Two approaches are proposed. The first is that at each iteration $i$, for all $j\lt i$, we include the constraint $x_j^TXx_j=0$ to make sure each principal component we compute is orthogonal to the previous ones. The procedure then stops automatically after $n$ steps (there is no solution to the $(n+1)$'th problem). The other way is to stop as soon as every entry of $A$ is smaller than $\rho$ in absolute value, because at that point the remaining entries of $A$ are below the noise level $\rho$.

### Solving the Semidefinite Problem

The cardinality constraint in the formulation (or its corresponding term in the penalized form) introduces a quadratic number of terms in the problem. This makes it practically impossible to use an interior-point method to solve the problem for large input dimensions. So we need to use other methods, but then speed becomes an issue. Denoting the required accuracy by $\epsilon$, we can expect an interior-point-based program to converge after $\textstyle O(F(n)\log\frac1{\epsilon})$ iterations, for some function $F$ of the input size $n$. Here we will manage to solve the problem in $\textstyle O(\frac{F(n)}{\epsilon})$ iterations using a first-order scheme.

A first-order method can solve a problem in $\textstyle O(\frac1{\epsilon})$ iterations if the objective satisfies a smoothness condition. But in our case the objective $\lambda^{\max}$ is not smooth, and as a result a direct application of a first-order method to our problem yields an algorithm that stops only after $\textstyle O(\frac1{\epsilon^2})$ iterations, which is too slow. To address this, we consider formulation (5) and define a smooth approximation of the function $\lambda^{\max}$.

To come up with a smooth approximation of the objective, we define $f_{\mu}(X)=\mu\log\textbf{Tr}(e^{\frac X\mu})$. One can verify that $\lambda^{\max}(X)\leq f_\mu(X)\leq\lambda^{\max}(X)+\mu\log n$, so for $\textstyle\mu=\frac{\epsilon}{\log n}$, $f_{\mu}$ is a smooth approximation of $\lambda^{\max}$ with an additive error of at most $\epsilon$. This way we obtain a scheme that solves the program in $\textstyle O(\frac d{\epsilon}\sqrt{\log d})$ iterations, each taking $O(d^3)$ time.
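The two bounds on $f_\mu$ are easy to verify numerically. A sketch (assuming NumPy): since $\textbf{Tr}(e^{X/\mu})=\sum_i e^{\lambda_i/\mu}$, $f_\mu$ can be computed from the eigenvalues with a stable log-sum-exp.

```python
import numpy as np

def f_mu(X, mu):
    """Smooth surrogate f_mu(X) = mu * log Tr(exp(X/mu)), via eigenvalues."""
    lam = np.linalg.eigvalsh(X)
    m = lam.max()
    # stable log-sum-exp: mu*log(sum exp(lam/mu)) = m + mu*log(sum exp((lam-m)/mu))
    return m + mu * np.log(np.exp((lam - m) / mu).sum())

rng = np.random.default_rng(4)
n, mu = 8, 0.1
X = rng.standard_normal((n, n)); X = (X + X.T) / 2   # random symmetric matrix
lmax = np.linalg.eigvalsh(X).max()
```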

## Experimental Results

Each point in the figures below corresponds to an experiment on 500 genes. The points are pre-clustered into 4 clusters based on some prior knowledge. The top three principal components are computed using each of the PCA and sparse PCA methods, and the points are plotted in the basis defined by these three components. For PCA, each principal component is a combination of all 500 variables (corresponding to 500 genes), while in sparse PCA each involves variables corresponding to at most 6 genes.

Figure 1. Distribution of gene expression data under PCA vs. sparse PCA. The point colors are based on a pre-computed independent clustering.

In the next figure, the left diagram compares the cumulative number of non-zero elements in the principal components for three methods: SPCA, the method explained here with $k=5$, and the same method with $k=6$. The right diagram shows the cumulative percentage of total variance explained by the first principal components resulting from PCA, SPCA (dashed) and the method explained here with $k=5$ and $k=6$ (solid lines).

Figure 2. Cumulative cardinality and cumulative percentage of total variance explained by the first principal components resulting from PCA, SPCA (dashed) and the method explained here with $k=5$ and $k=6$ (solid lines).