Deflation Methods for Sparse PCA


Introduction

Principal component analysis (PCA) is a popular change of variables technique used in dimension reduction and visualization. The goal of PCA is to extract several principal components, linear combinations of input variables that together best account for the variance in a data set. Often, PCA is formulated as an eigenvalue decomposition problem: each eigenvector of the sample covariance matrix of a data set corresponds to the loadings or coefficients of a principal component. A common approach to solving this partial eigenvalue decomposition is to iteratively alternate between two subproblems: rank-one variance maximization and matrix deflation.
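As a minimal illustration of this eigendecomposition formulation (not code from the paper; the function name and numpy usage are our own), the following sketch extracts the loadings of the first [math]\displaystyle{ r }[/math] principal components from a data matrix:

```python
import numpy as np

def pca_loadings(X, r):
    """Loadings of the first r principal components of X (n samples x p variables)."""
    A = np.cov(X, rowvar=False)            # p x p sample covariance (np.cov centers X)
    eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns ascending eigenvalues for symmetric A
    order = np.argsort(eigvals)[::-1]      # reorder indices so eigenvalues descend
    return eigvecs[:, order[:r]]           # columns = loadings of the r leading components
```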

A primary drawback of PCA is its lack of sparsity. Each principal component is a linear combination of all variables, and the loadings are typically non-zero. Sparsity is desirable as it often leads to more interpretable results, reduced computation time, and improved generalization. In analogy to the PCA setting, many authors attempt to solve the sparse PCA problem by iteratively alternating between two subtasks: cardinality-constrained rank-one variance maximization and matrix deflation. The former is a hard problem, and a variety of relaxations and approximate solutions have been developed in the literature; a simple heuristic for it is sketched below. The latter subtask has received relatively little attention and is typically borrowed without justification from the PCA context. In this paper, the author demonstrates that the standard PCA deflation procedure is seldom appropriate for the sparse PCA setting. To rectify the situation, he first develops several heuristic deflation alternatives with more desirable properties, and then reformulates the sparse PCA optimization problem to explicitly reflect the maximum-additional-variance objective on each round. The result is a generalized deflation procedure that typically outperforms more standard techniques on real-world datasets.
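For concreteness, one simple approximate approach to the cardinality-constrained subtask is a truncated power iteration that zeroes out all but the [math]\displaystyle{ k }[/math] largest-magnitude entries at each step. This is only an illustrative heuristic, not the method of this paper or of any particular relaxation; the function name and defaults are ours:

```python
import numpy as np

def sparse_rank_one(A, k, n_iter=200):
    """Heuristic for max x^T A x s.t. x^T x = 1, Card(x) <= k (truncated power iteration)."""
    p = A.shape[0]
    x = np.ones(p) / np.sqrt(p)           # simple deterministic start; initialization matters in practice
    for _ in range(n_iter):
        x = A @ x                         # power-method step
        idx = np.argsort(np.abs(x))[:-k]  # indices of the p - k smallest-magnitude entries
        x[idx] = 0.0                      # enforce the cardinality constraint
        nrm = np.linalg.norm(x)
        if nrm == 0:
            break
        x /= nrm                          # project back onto the unit sphere
    return x
```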

Notation

[math]\displaystyle{ {I} }[/math] is the identity matrix. [math]\displaystyle{ \mathbb{S}^{p}_{+} }[/math] is the set of all symmetric, positive semidefinite matrices in [math]\displaystyle{ \mathbb{R}^{p\times p} }[/math]. [math]\displaystyle{ \textbf{Card}(x) }[/math] denotes the cardinality of, i.e. the number of non-zero entries in, the vector [math]\displaystyle{ x }[/math].

Deflation methods

A matrix deflation modifies a matrix to eliminate the influence of a given eigenvector, typically by setting the associated eigenvalue to zero.

Hotelling's deflation and PCA

In the PCA setting, the goal is to extract the [math]\displaystyle{ r }[/math] leading eigenvectors of the sample covariance matrix, [math]\displaystyle{ \textbf{A}_0 \in\mathbb{S}^{p}_{+} }[/math], as its eigenvectors are equivalent to the loadings of the first [math]\displaystyle{ r }[/math] principal components. Hotelling’s deflation method is a simple and popular technique for sequentially extracting these eigenvectors. On the [math]\displaystyle{ t }[/math]-th iteration of the deflation method, we first extract the leading eigenvector of [math]\displaystyle{ \textbf{A}_{t -1} }[/math],

[math]\displaystyle{ x_t = \underset{x:x^{T}x=1}{\operatorname{argmax}}\, x^{T}\textbf{A}_{t-1}x }[/math]

and we then use Hotelling's deflation to annihilate [math]\displaystyle{ x_t }[/math]:

[math]\displaystyle{ \textbf{A}_{t}=\textbf{A}_{t-1}-x_tx^T_t\textbf{A}_{t-1}x_tx^T_t }[/math]

The deflation step ensures that the [math]\displaystyle{ (t+1) }[/math]-st leading eigenvector of [math]\displaystyle{ \textbf{A}_{0} }[/math] is the leading eigenvector of [math]\displaystyle{ \textbf{A}_{t} }[/math].
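A minimal numpy sketch of this two-step loop, assuming for simplicity that each leading eigenvector is extracted exactly with a dense eigensolver rather than by the maximization above (the function name is ours):

```python
import numpy as np

def hotellings_deflation(A0, r):
    """Sequentially extract the r leading eigenvectors of A0 via Hotelling's deflation."""
    A = A0.copy()
    eigenvectors = []
    for _ in range(r):
        eigvals, eigvecs = np.linalg.eigh(A)
        x = eigvecs[:, -1]                           # leading eigenvector of A_{t-1}
        eigenvectors.append(x)
        A = A - np.outer(x, x) @ A @ np.outer(x, x)  # A_t = A_{t-1} - x x^T A_{t-1} x x^T
    return np.column_stack(eigenvectors)             # columns = r leading eigenvectors of A0
```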

Proposition

If [math]\displaystyle{ \lambda_{1}\ge \ldots \ge \lambda_{p} }[/math] are the eigenvalues of [math]\displaystyle{ A\in \mathbb{S}_{+}^{p} }[/math], [math]\displaystyle{ x_{1},\ldots,x_{p} }[/math] are the corresponding eigenvectors, and [math]\displaystyle{ \hat{A}=A-x_{j}x_{j}^{T}Ax_{j}x_{j}^{T} }[/math] for some [math]\displaystyle{ j\in \{1,\ldots,p\} }[/math], then [math]\displaystyle{ \hat{A} }[/math] has eigenvectors [math]\displaystyle{ x_{1},\ldots,x_{p} }[/math] with corresponding eigenvalues [math]\displaystyle{ \lambda_{1},\ldots,\lambda_{j-1},0,\lambda_{j+1},\ldots,\lambda_{p} }[/math].
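The proposition is easy to check numerically. The following sketch (ours, for illustration only) deflates a random positive semidefinite matrix by one of its eigenvectors and verifies that the spectrum is unchanged except that [math]\displaystyle{ \lambda_{j} }[/math] is mapped to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T                                       # random matrix in S^5_+
eigvals, eigvecs = np.linalg.eigh(A)              # ascending eigenvalues, matching eigenvectors
j = 3                                             # deflate an arbitrary eigenvector
x_j = eigvecs[:, j]
A_hat = A - np.outer(x_j, x_j) @ A @ np.outer(x_j, x_j)

expected = eigvals.copy()
expected[j] = 0.0                                 # lambda_j is annihilated, rest untouched
new_vals = np.linalg.eigvalsh(A_hat)
assert np.allclose(np.sort(new_vals), np.sort(expected))
```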