# A Penalized Matrix Decomposition, with Applications to Sparse Principal Components and Canonical Correlation Analysis

## Introduction

Matrix decompositions or factorizations are useful tools in identifying the underlying structure of a matrix and the data it represents. However, many of these decompositions produce dense factors which are hard to interpret. Enforcing sparsity within the factors produces factorizations which are more amenable to interpretation. In their paper, Witten, Tibshirani, and Hastie<ref name="WTH2009">Daniela M. Witten, Robert Tibshirani, and Trevor Hastie. (2009) "A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis". Biostatistics, 10(3):515–534.</ref> develop a penalized matrix decomposition (PMD) using penalty functions on the factors to ensure sparsity and ease of interpretation. They divide their paper into three major components. They begin by presenting their algorithm for PMD and derive efficient versions for two sets of common penalty functions. In addition, they use a particular form of their algorithm to derive a sparse version of principal component analysis (PCA). Comparing this version to two other sparse PCA methods, by Jolliffe and others<ref name="JTU2003">Ian T. Jolliffe, Nickolay T. Trendafilov, and Mudassir Uddin. (2003) "A modified principal component technique based on the lasso". Journal of Computational and Graphical Statistics, 12(3):531–547.</ref> and Zou and others<ref name="ZHT2006">Hui Zou, Trevor Hastie, and Robert Tibshirani. (2006) "Sparse Principal Component Analysis". Journal of Computational and Graphical Statistics, 15(2):265–286.</ref>, they show how the three methods are related. In particular, they show how their sparse PCA algorithm can be used to efficiently solve the SCoTLASS problem proposed by Jolliffe and others<ref name="JTU2003"/>, a problem that is computationally hard to solve in its original form. Finally, they use their PMD to yield a new method for penalized canonical correlation analysis (CCA). The main application of this procedure is to genomic data.
They argue that since it is becoming increasingly common for biologists to perform multiple assays on the same set of samples, there is an increased need for methods that perform inference across data sets. To this end, they demonstrate their penalized CCA method on a genomic data set consisting of gene expression and DNA copy number measurements on the same set of patient samples. Using penalized CCA they can identify sets of genes that are correlated with regions of copy number change.

## Penalized Matrix Decomposition

The PMD is a generalization of the singular value decomposition (SVD). The SVD of a matrix $\textbf{X} \in \mathbb{R}^{n \times p}$ with rank $K \leq \min(n,p)$ can be written as follows

$\textbf{X} = \textbf{U}\textbf{D}\textbf{V}^T, \; \textbf{U}^T\textbf{U} = \textbf{I}_K, \; \textbf{V}^T\textbf{V} = \textbf{I}_K, \; d_1 \geq d_2 \geq \cdots \geq d_K \gt 0, \;\; (1)$

where $\textbf{U} \in \mathbb{R}^{n \times K}$ and $\textbf{V} \in \mathbb{R}^{p \times K}$ have orthonormal columns.

In their paper the authors assume the overall mean of $\textbf{X}$ is 0. If we let $\textbf{u}_k$ be the $k$th column of $\textbf{U}$, $\textbf{v}_k$ the $k$th column of $\textbf{V}$, and $d_k$ the $k$th diagonal element of the diagonal matrix $\textbf{D}$ then the SVD of $\textbf{X}$ can also be written in the following form

$\textbf{X} = \sum_{k=1}^K d_k\textbf{u}_k\textbf{v}_k^T, \quad \textrm{ and } \quad \sum_{k=1}^r d_k\textbf{u}_k\textbf{v}_k^T = \arg \min_{\hat{\textbf{X}}\in M(r)} \|\textbf{X}-\hat{\textbf{X}}\|^2_F, \;\; (2)$

where $M(r)$ is the set of rank-$r$ $n \times p$ matrices. That is, the first $r$ components of the singular value decomposition give the best rank-$r$ approximation to $\textbf{X}$ in the Frobenius norm.
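As a quick sanity check of this best rank-$r$ property, the following NumPy sketch (variable names are mine, not from the paper) verifies that the squared Frobenius error of the truncated SVD equals the sum of the discarded squared singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))

# Thin SVD: X = U diag(d) V^T
U, d, Vt = np.linalg.svd(X, full_matrices=False)

# Best rank-r approximation: keep the first r singular triplets
r = 2
X_hat = U[:, :r] @ np.diag(d[:r]) @ Vt[:r, :]

# Its squared Frobenius error is the sum of the discarded d_k^2
err2 = np.linalg.norm(X - X_hat, "fro") ** 2
print(np.isclose(err2, np.sum(d[r:] ** 2)))  # True
```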

By imposing alternative constraints on $\textbf{u}_k$ and $\textbf{v}_k$ the authors derive a penalized matrix decomposition (PMD). For simplicity, the authors begin by considering the rank-1 approximation. In this case, the PMD is the solution to the following optimization problem:

$\min_{d,\textbf{u},\textbf{v}} \frac{1}{2}\|\textbf{X} - d\textbf{u}\textbf{v}^T \|^2_F \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 = 1, \; \| \textbf{v} \|^2_2 = 1, \; P_1(\textbf{u}) \leq c_1, \; P_2(\textbf{v}) \leq c_2, \; d \geq 0, \;\; (3)$

where the convex penalty functions, $P_1$ and $P_2$, can take on a number of different forms. The authors focus their analysis on the lasso function and fused lasso function which have the following functional forms:

1. lasso: $P_1(\textbf{u}) = \sum_{i=1}^n |u_i|$ and
2. fused lasso: $P_1(\textbf{u}) = \sum_{i=1}^n |u_i| + \lambda \sum_{i=2}^n |u_i-u_{i-1}|$, where $\lambda \gt 0$.

The values of $c_1$ and $c_2$ in (3) are user defined parameters which can be determined by cross-validation or can be chosen to give a desired level of sparsity in the factors.

Before solving (3), the authors simplify the objective function using the following theorem.

### Theorem 1

Let $\textbf{U}$ and $\textbf{V}$ be $n \times K$ and $p \times K$ orthogonal matrices and $\textbf{D}$ a diagonal matrix with diagonal elements $d_k$. Then

$\frac{1}{2}\|\textbf{X} - \textbf{U}\textbf{D}\textbf{V}^T \|^2_F = \frac{1}{2}\|\textbf{X}\|^2_F - \sum_{k=1}^K d_k\textbf{u}_k^T\textbf{X}\textbf{v}_k + \frac{1}{2}\sum_{k=1}^K d_k^2. \;\;\;(4)$

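The identity (4) is easy to verify numerically. A small NumPy check (my construction, not the authors' proof), using random matrices with orthonormal columns:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 7, 5, 3
X = rng.standard_normal((n, p))

# Random U, V with orthonormal columns (via QR) and a positive diagonal D
U = np.linalg.qr(rng.standard_normal((n, K)))[0]
V = np.linalg.qr(rng.standard_normal((p, K)))[0]
d = rng.uniform(0.5, 2.0, K)

# Left-hand side of (4)
lhs = 0.5 * np.linalg.norm(X - U @ np.diag(d) @ V.T, "fro") ** 2

# Right-hand side of (4), term by term
rhs = (0.5 * np.linalg.norm(X, "fro") ** 2
       - sum(d[k] * U[:, k] @ X @ V[:, k] for k in range(K))
       + 0.5 * np.sum(d ** 2))
print(np.isclose(lhs, rhs))  # True
```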

From Theorem 1 it is clear that the values of $\textbf{u}$ and $\textbf{v}$ that solve (3) must also solve the following problem:

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 = 1, \; \| \textbf{v} \|^2_2 = 1, \; P_1(\textbf{u}) \leq c_1, \; P_2(\textbf{v}) \leq c_2, \;\;\; (5)$

where $d = \textbf{u}^T\textbf{X}\textbf{v}$. The goal is to transform (5) into a biconvex problem in $\textbf{u}$ and $\textbf{v}$, so that each of the convex subproblems can be used to iteratively solve the optimization problem. The objective function $\textbf{u}^T\textbf{X}\textbf{v}$ is bilinear in $\textbf{u}$ and $\textbf{v}$; however, each of the subproblems is not convex due to the $L_2$-equality constraints on $\textbf{u}$ and $\textbf{v}$. If the $L_2$-equality constraints are relaxed to inequalities, then the (rank-1) PMD becomes

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; \| \textbf{v} \|^2_2 \leq 1, \; P_1(\textbf{u}) \leq c_1, \; P_2(\textbf{v}) \leq c_2, \;\;\; (6)$

and with $\textbf{v}$ fixed the following convex subproblem results:

$\max_{\textbf{u}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; P_1(\textbf{u}) \leq c_1, \;\;\;(7)$

A similar convex sub-problem is obtained for $\textbf{v}$ with $\textbf{u}$ fixed. Thus, (6) is biconvex in $\textbf{u}$ and $\textbf{v}$ and the following iterative algorithm can be used to solve (6).

### Algorithm 1: Computation of single-factor PMD

1. Initialize $\textbf{v}$ to have $L_2$-norm equal to 1.
2. Iterate until convergence:
    1. $\textbf{u} \leftarrow \arg \max_{\textbf{u}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; P_1(\textbf{u}) \leq c_1$.
    2. $\textbf{v} \leftarrow \arg \max_{\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{v} \|_2^2 \leq 1, \; P_2(\textbf{v}) \leq c_2$.
3. $d \leftarrow \textbf{u}^T\textbf{X}\textbf{v}$.

Since the PMD is a generalization of the SVD, we should expect the rank-1 PMD algorithm to give the leading singular vectors, if the constraints on $\textbf{u}$ and $\textbf{v}$ imposed by the penalty functions are removed.

$\textbf{v}^{(i)}=\frac{(\textbf{X}^T\textbf{X})^i\textbf{v}^{(0)}}{\|(\textbf{X}^T\textbf{X})^i\textbf{v}^{(0)}\|_2}$

Without the penalty functions, Algorithm 1 reduces to the power method, shown in the equation above, for computing the leading eigenvector of $\textbf{X}^T\textbf{X}$, which is the leading right singular vector of $\textbf{X}$; thus this goal is attained by the proposed algorithm. Although the algorithm does not necessarily converge to a global optimum of the relaxed problem, the empirical studies performed by the authors indicate that it does converge to interpretable factors for appropriate choices of the penalty terms. The authors also note that the algorithm has the desirable property that the objective function of the relaxed problem improves monotonically over the iterations of Step 2.
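To make this connection concrete, the sketch below (mine) runs the penalty-free updates of Algorithm 1, i.e. plain alternating power iterations, and compares the result with the leading singular triplet from `numpy.linalg.svd`:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 6))

# Algorithm 1 without penalties: u <- Xv/||Xv||_2, v <- X^T u/||X^T u||_2
v = rng.standard_normal(6)
v /= np.linalg.norm(v)
for _ in range(200):
    u = X @ v
    u /= np.linalg.norm(u)
    v = X.T @ u
    v /= np.linalg.norm(v)

# Compare with the leading singular vectors/value of X
U, d, Vt = np.linalg.svd(X, full_matrices=False)
print(np.isclose(abs(v @ Vt[0]), 1.0))  # v matches v_1 up to sign
print(np.isclose(u @ X @ v, d[0]))      # d = u^T X v equals d_1
```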

Algorithm 1 can easily be extended to obtain multiple factors. This can be done by maximizing the single-factor criterion (6) repeatedly, each time using as the $\textbf{X}$ matrix the residual obtained by subtracting from the data matrix the factors found so far. This results in the following algorithm.

### Algorithm 2: Computation of $K$ factors of PMD

1. Let $\textbf{X}^1 \leftarrow \textbf{X}$.
2. For $k = 1, \ldots, K$:
    1. Find $\textbf{u}_k$, $\textbf{v}_k$, and $d_k$ by applying the single-factor PMD algorithm (Algorithm 1) to data $\textbf{X}^k$.
    2. $\textbf{X}^{k+1} \leftarrow \textbf{X}^k - d_k\textbf{u}_k\textbf{v}_k^T$.
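A minimal sketch (mine) of this deflation scheme with the penalty constraints removed, so that the single-factor step is just the power method; the recovered $d_k$ should then match the leading singular values of $\textbf{X}$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((9, 6))

def single_factor(A, iters=500):
    # Algorithm 1 with the penalty constraints removed (power method)
    v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(iters):
        u = A @ v
        u /= np.linalg.norm(u)
        v = A.T @ u
        v /= np.linalg.norm(v)
    return u, v, u @ A @ v

# Algorithm 2: repeatedly fit a single factor to the residual
R, ds = X.copy(), []
for _ in range(3):
    u, v, dk = single_factor(R)
    ds.append(dk)
    R = R - dk * np.outer(u, v)   # deflate: subtract the fitted factor

# Without penalties the recovered d_k are the leading singular values
print(np.allclose(ds, np.linalg.svd(X, compute_uv=False)[:3]))  # True
```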

Without the $P_1$- and $P_2$-penalty constraints, the $K$-factor PMD algorithm leads to the rank-$K$ SVD of $\textbf{X}$. However, with $P_1$ and/or $P_2$ present, it is important to note that, unlike the singular vectors of the SVD, the successive solutions are not orthogonal, since the sparsity constraints are generally incompatible with orthogonality.

Algorithms 1 and 2 are written for generic penalty functions $P_1$ and $P_2$. Specifying functional forms for $P_1$ and $P_2$ does not change the overall structure of either algorithm; only Step 2 of Algorithm 1 must be modified to solve the specific optimization problems those forms induce. The authors focus on two specific forms in their analysis, which they call PMD($L_1$,$L_1$) and PMD($L_1$,FL).

## PMD($L_1$,$L_1$)

The PMD($L_1$,$L_1$) criterion is as follows:

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; \| \textbf{v} \|^2_2 \leq 1, \; \| \textbf{u} \|_1 \leq c_1, \; \| \textbf{v} \|_1 \leq c_2. \;\;\; (8)$

This method results in factors $\textbf{u}$ and $\textbf{v}$ that are sparse for appropriately chosen $c_1$ and $c_2$. In order to guarantee a feasible solution, $c_1$ and $c_2$ must be restricted to the ranges $1 \leq c_1 \leq \sqrt{n}$ and $1 \leq c_2 \leq \sqrt{p}$. The solution to the optimization problem in (8) is a function of the soft-thresholding operator, denoted by $S$, where $S(a,c) = \textrm{sgn}(a)(|a| - c)_+$, $c \gt 0$ is a constant, and $x_+$ is defined to equal $x$ if $x \gt 0$ and 0 otherwise. Using the following lemma, we can see how to define a solution to (8) in terms of $S$.
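The soft-thresholding operator is one line of NumPy (the function name is mine):

```python
import numpy as np

def soft_threshold(a, c):
    """S(a, c) = sgn(a) * (|a| - c)_+, applied elementwise."""
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)

# Entries with |a_i| <= c are set exactly to zero; the rest shrink toward 0
a = np.array([3.0, -1.5, 0.4, -0.2])
s = soft_threshold(a, 1.0)   # entries: 2.0, -0.5, 0.0, 0.0
```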

### Lemma 1

Consider the optimization problem

$\max_{\textbf{u}} \textbf{u}^T\textbf{a} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; \| \textbf{u} \|_1 \leq c. \;\;\; (9)$

The solution satisfies $\textbf{u} = \frac{S(\textbf{a},\Delta)}{\|S(\textbf{a},\Delta)\|_2}$, with $\Delta = 0$ if this results in $\| \textbf{u} \|_1 \leq c$; otherwise $\Delta$ is chosen so that $\| \textbf{u} \|_1 = c$.

So if $\textbf{a} = \textbf{X}\textbf{v}$ in Lemma 1, then to solve the PMD criterion in (8) using Algorithm 1, Steps 2(a) and 2(b) can be adjusted as follows.

1. Iterate until convergence:
    1. $\textbf{u} \leftarrow \frac{S(\textbf{X}\textbf{v},\Delta_1)}{\|S(\textbf{X}\textbf{v},\Delta_1)\|_2}$, where $\Delta_1 = 0$ if this results in $\| \textbf{u} \|_1 \leq c_1$; otherwise $\Delta_1$ is chosen to be a positive constant such that $\| \textbf{u} \|_1 = c_1$.
    2. $\textbf{v} \leftarrow \frac{S(\textbf{X}^T\textbf{u},\Delta_2)}{\|S(\textbf{X}^T\textbf{u},\Delta_2)\|_2}$, where $\Delta_2 = 0$ if this results in $\| \textbf{v} \|_1 \leq c_2$; otherwise $\Delta_2$ is chosen to be a positive constant such that $\| \textbf{v} \|_1 = c_2$.

To find the values of $\Delta_1$ and $\Delta_2$ for each update of $\textbf{u}$ and $\textbf{v}$ the authors suggest using a binary search.
Geometrically, the solution lies on the intersection of the $L_1$- and $L_2$-constraint regions; depending on where these boundaries intersect, the solution may have all of its elements nonzero or some of them exactly zero.
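The update in Step 2(a) can be sketched as follows (my implementation; `lemma1_update` is a hypothetical name): soft-threshold, normalize, and binary-search the threshold $\Delta$ until the $L_1$ bound is met.

```python
import numpy as np

def soft(a, c):
    # Soft-thresholding operator S(a, c) = sgn(a)(|a| - c)_+
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)

def lemma1_update(a, c, tol=1e-8):
    """Solve max_u u^T a s.t. ||u||_2 <= 1, ||u||_1 <= c (Lemma 1)."""
    u = a / np.linalg.norm(a)
    if np.sum(np.abs(u)) <= c:
        return u                        # Delta = 0 is already feasible
    lo, hi = 0.0, np.max(np.abs(a))     # Delta lies strictly below max|a_i|
    while hi - lo > tol:
        mid = (lo + hi) / 2
        su = soft(a, mid)
        if np.sum(np.abs(su)) / np.linalg.norm(su) > c:
            lo = mid                    # threshold too small: L1 norm too big
        else:
            hi = mid
    su = soft(a, hi)
    return su / np.linalg.norm(su)

a = np.array([4.0, 2.0, 1.0, -0.5])
u = lemma1_update(a, c=1.2)
# u has unit L2 norm and L1 norm (approximately) equal to c
```

The same routine, applied to $\textbf{X}^T\textbf{u}$, gives Step 2(b).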

## PMD($L_1$,FL)

The PMD($L_1$,FL) criterion is as follows (where "FL" stands for the fused lasso penalty):

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; \| \textbf{v} \|^2_2 \leq 1, \; \| \textbf{u} \|_1 \leq c_1, \; \sum_j|v_j| + \lambda\sum_j|v_j-v_{j-1}| \leq c_2. \;\;\; (10)$

Since the constraints on $\textbf{u}$ remain the same we expect the resulting $\textbf{u}$ to be sparse and the fused lasso constraint on $\textbf{v}$ should result in $\textbf{v}$ sparse and somewhat smooth (depending on the value of $\lambda \geq 0$). By recasting (10) using the Lagrange form, rather than the bound form of the constraints on $\textbf{v}$, we get

$\min_{\textbf{u},\textbf{v}} -\textbf{u}^T\textbf{X}\textbf{v} + \frac{1}{2}\textbf{v}^T\textbf{v} + \lambda_1\sum_j|v_j| + \lambda_2\sum_j|v_j-v_{j-1}| \; \textrm{ subject } \; \textrm{ to } \; \| \textbf{u} \|_2^2 \leq 1, \; \| \textbf{u} \|_1 \leq c_1. \;\;\;(11)$

Solving (11) can be done by replacing Steps 2(a) and 2(b) in Algorithm 1 with the appropriate updates:

1. Iterate until convergence:
    1. $\textbf{u} \leftarrow \frac{S(\textbf{X}\textbf{v},\Delta_1)}{\|S(\textbf{X}\textbf{v},\Delta_1)\|_2}$, where $\Delta_1 = 0$ if this results in $\| \textbf{u} \|_1 \leq c_1$; otherwise $\Delta_1$ is chosen to be a positive constant such that $\| \textbf{u} \|_1 = c_1$.
    2. $\textbf{v} \leftarrow \arg \min_{\textbf{v}}\{\frac{1}{2}\|\textbf{X}^T\textbf{u}-\textbf{v}\|^2 + \lambda_1\sum_j|v_j| + \lambda_2\sum_j|v_j-v_{j-1}|\}$.

Step 2(b) can be performed using fast software implementing fused lasso regression as described in Friedman and others<ref name="F2007">Jerome Friedman, Trevor Hastie, Holger Hoefling, and Robert Tibshirani. (2007) "Pathwise coordinate optimization." Annals of Applied Statistics, 1:302–332.</ref> and Tibshirani and Wang<ref name ="TW2008">Robert Tibshirani and Pei Wang. (2008) "Spatial smoothing and hot spot detection for CGH data using the fused lasso." Biostatistics (Oxford, England), 9(1):18–29.</ref>.

The PMD algorithm can also be used for missing-value imputation, in a manner related to SVD-based missing-value imputation methods; for more information see Troyanskaya and others<ref name="TCB2001">O. Troyanskaya, M. Cantor, G. Sherlock, P. Brown, T. Hastie, R. Tibshirani, D. Botstein, and R. Altman. (2001) "Missing value estimation methods for DNA microarrays." Bioinformatics, 16:520–525.</ref>.

## Sparse PCA

PCA is a tool used for both dimensionality reduction and data analysis. The principal components derived in PCA are linear combinations of the original variables that have maximum variance and also minimize the reconstruction error. However, these derived components are often linear combinations of all the original variables. Since the principal components can be computed using the SVD, we can use the earlier SVD notation to define $\textbf{u}_k$ as a principal component, $\textbf{X}$ as the data matrix, and $\textbf{v}_k$ as a loading vector, where $\textbf{u}_k = \textbf{X}\textbf{v}_k$. The columns of the data matrix represent the original variables, so it is the dense nature of the loading vectors that makes interpretation of the principal components difficult and has led to the development of sparse PCA methods. The authors present two existing methods for sparse PCA and derive a new method based on the PMD. They outline the three methods and their similarities as follows:

1. SPC: The authors propose a new sparse PCA method they label SPC. They define it using the PMD criterion with $P_2(\textbf{v}) = \|\textbf{v}\|_1$ and no $P_1$-constraint on $\textbf{u}$. The authors refer to this criterion as PMD($\cdot$,$L_1$), and it can be written as follows:

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \|\textbf{v}\|_1 \leq c_2, \; \|\textbf{u}\|_2^2 \leq 1, \; \|\textbf{v}\|_2^2 \leq 1. \;\;\; (12)$

The algorithm for PMD($\cdot$,$L_1$) is obtained by replacing Step 2(a) of the single-factor PMD($L_1$, $L_1$) algorithm with $\textbf{u} \leftarrow \frac{\textbf{X}\textbf{v}}{\|\textbf{X}\textbf{v}\|_2}$.
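Putting the pieces together, a sketch (mine; the function names are hypothetical) of single-factor SPC: the $\textbf{v}$-update is the Lemma 1 soft-thresholding step, while the $\textbf{u}$-update is the unpenalized normalization above. With $c_2 = \sqrt{p}$ the $L_1$ constraint is never active (by Cauchy–Schwarz), so the algorithm should recover the leading singular value; with a smaller $c_2$ the loadings become sparse.

```python
import numpy as np

def soft(a, c):
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)

def l1_l2_update(a, c, tol=1e-8):
    # Lemma 1: u = S(a, Delta)/||S(a, Delta)||_2, Delta by binary search
    u = a / np.linalg.norm(a)
    if np.sum(np.abs(u)) <= c:
        return u
    lo, hi = 0.0, np.max(np.abs(a))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        su = soft(a, mid)
        if np.sum(np.abs(su)) / np.linalg.norm(su) > c:
            lo = mid
        else:
            hi = mid
    su = soft(a, hi)
    return su / np.linalg.norm(su)

def spc_single_factor(X, c2, iters=100):
    # PMD(., L1): no penalty on u, an L1 penalty on v
    v = np.linalg.svd(X, full_matrices=False)[2][0]   # warm start
    for _ in range(iters):
        u = X @ v
        u /= np.linalg.norm(u)
        v = l1_l2_update(X.T @ u, c2)
    return u, v, u @ X @ v

rng = np.random.default_rng(4)
X = rng.standard_normal((15, 10))
_, v, d_spc = spc_single_factor(X, c2=np.sqrt(10))   # constraint inactive
_, v_sparse, _ = spc_single_factor(X, c2=1.5)        # constraint active
```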

2. SCoTLASS: Jolliffe and others<ref name="JTU2003"/> derive the SCoTLASS procedure for sparse PCA using the maximal variance property of principal components. The first sparse principal component solves the problem

$\max_{\textbf{v}} \textbf{v}^T\textbf{X}^T\textbf{X}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \; \|\textbf{v} \|^2_2 \leq 1, \; \|\textbf{v} \|_1 \leq c. \;\;\; (13)$

Subsequent components solve the same problem with the additional constraint that they must be orthogonal to previous components. The authors in <ref name="WTH2009"/> argue that the SCoTLASS criterion is the simplest, most natural way to define the notion of sparse principal components; however, the problem in (13) is not convex, and therefore the computations are difficult. The authors show how SPC can easily be recast as the SCoTLASS criterion and thus argue that the efficient SPC algorithm can be used to find the first SCoTLASS component. Only the first component is the solution to the SCoTLASS criterion since the SPC method does not enforce the orthogonality constraint on subsequent components.

3. SPCA: To obtain sparse PCA, Zou and others<ref name="ZHT2006"/> use the reconstruction error property of principal components. For a single component, their sparse PCA technique solves

$\min_{\theta,\textbf{v}} \| \textbf{X} - \textbf{X}\textbf{v}\theta^T \|^2_F + \lambda_1 \|\textbf{v} \|^2_2 + \lambda_2\| \textbf{v} \|_1 \; \textrm{ subject } \; \textrm{ to } \; \| \theta \|_2 = 1, \;\;\; (14)$

where $\lambda_1, \lambda_2 \geq 0$ and $\textbf{v}$ and $\theta$ are $p$-vectors. By considering the bound form of (14), relaxing the $L_2$-constraint on $\theta$, and introducing an $L_1$-constraint on $\theta$, the authors in <ref name="WTH2009"/> recast the SPCA problem into the same form as the SCoTLASS problem. However, since constraints were introduced to recast the SPCA problem as the SCoTLASS criterion, not all SPCA solutions will be solutions to the SCoTLASS criterion; as the authors note, this implies that SPCA will give solutions with lower reconstruction error than the SCoTLASS criterion.

After introducing the three sparse PCA methods and comparing their formulations, the authors then compare the performance of SPCA and SPC by comparing the proportion of variance explained by each on a publicly available gene expression data set from http://icbp.lbl.gov/breastcancer/ and described in <ref name="C2006">Koei Chin, Sandy DeVries, Jane Fridlyand, Paul T. Spellman, Ritu Roydasgupta, Wen-Lin Kuo, Anna Lapuk, Richard M. Neve, Zuwei Qian, Tom Ryder, and others. (2006) "Genomic and transcriptional aberrations linked to breast cancer pathophysiologies." Cancer Cell, 10:529–541.</ref>. Since the sparse principal components for both methods are not orthogonal, special care must be taken in determining the proportion of variance explained by the first $k$ sparse principal components. If the principal component vectors are not orthogonal, then the information content in each of the components may overlap. As well, it is incorrect to calculate the total variance as the sum of the individual component variances if the principal components are correlated. These factors are accounted for in the method proposed by Shen and Huang<ref name="SH2008">Haipeng Shen and Jianhua Z. Huang. (2008) "Sparse principal component analysis via regularized low rank matrix approximation." J. Multivar. Anal., 99(6):1015–1034.</ref>, which determines the proportion of variance explained by the first $k$ sparse principal components using the formula $\textrm{tr}(\textbf{X}_k^T\textbf{X}_k)$, where $\textbf{X}_k = \textbf{X}\textbf{V}_k(\textbf{V}_k^T\textbf{V}_k)^{-1}\textbf{V}_k^T$. Using this formula, SPC results in a substantially greater proportion of variance explained than SPCA.
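The Shen and Huang variance formula is easy to state in code. A sketch (mine) that also sanity-checks it against ordinary PCA loadings, where it reduces to the usual sum of the $k$ largest squared singular values:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((20, 8))

def variance_explained(X, Vk):
    # tr(X_k^T X_k), where X_k projects X onto the column space of V_k;
    # valid even when the loading vectors in V_k are not orthogonal
    P = Vk @ np.linalg.inv(Vk.T @ Vk) @ Vk.T
    Xk = X @ P
    return np.trace(Xk.T @ Xk)

# Sanity check with orthonormal PCA loadings (k = 2)
U, d, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:2].T                        # p x 2 matrix of ordinary loadings
print(np.isclose(variance_explained(X, Vk), np.sum(d[:2] ** 2)))  # True
```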

To complete their analysis of sparse PCA, the authors consider one additional form of the PMD algorithm. Since SPC, unlike the SCoTLASS procedure, does not produce orthogonal components, they develop a sparse PCA method that enforces orthogonality among the $\textbf{u}_k$ vectors (but not the $\textbf{v}_k$ vectors).

## Penalized CCA via PMD

Let $\textbf{Z}$ be an $n \times (p+q)$ data matrix where $n$ is the number of observations and $p+q$ is the number of variables. Suppose the variables can be partitioned into 2 meaningful sets to give $\textbf{Z}=[\textbf{X} \; \textbf{Y}]$, where $\textbf{X} \in \mathbb{R}^{n \times p}$ and $\textbf{Y}\in \mathbb{R}^{n \times q}$ and where the columns of $\textbf{X}$ and $\textbf{Y}$ are centered and scaled. Then it is natural to investigate whether a relationship between $\textbf{X}$ and $\textbf{Y}$ can be found. CCA, developed by Hotelling<ref name="H1936">Harold Hotelling. (1936) "Relations between two sets of variates." Biometrika, 28:321–377.</ref>, involves finding the canonical variates, $\textbf{u}$ and $\textbf{v}$, that maximize $\textrm{cor}(\textbf{X}\textbf{u},\textbf{Y}\textbf{v})$ subject to $\textbf{u}^T\textbf{X}^T\textbf{X}\textbf{u} \leq 1, \; \textbf{v}^T\textbf{Y}^T\textbf{Y}\textbf{v} \leq 1$. The solution can be determined from the eigenvectors of a function of the covariance matrices of $\textbf{X}$ and $\textbf{Y}$. However, the canonical variates given by the standard CCA algorithm are not sparse. In addition, these variates are not unique if $p$ or $q$ exceeds $n$. To find sparse linear combinations of $\textbf{X}$ and $\textbf{Y}$ that are correlated, the authors add penalty constraints to the original formulation of the CCA problem. As well, the authors substitute the identity matrix for the covariance matrices of $\textbf{X}$ and $\textbf{Y}$, since treating these matrices as diagonal has been shown to yield good results in other high-dimensional problems (<ref name="DFS2001">Sandrine Dudoit, Jane Fridlyand, and Terence P. Speed. (2001) "Comparison of discrimination methods for the classification of tumors using gene expression data." Journal of American Statistical Association, 96:1151–1160.</ref>, <ref name="THNC2003">Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. (2003) "Class prediction by nearest shrunken centroids, with applications to DNA microarrays." Statistical Sciences, 18(1):104–117.</ref>). This yields the following "diagonal penalized CCA":

$\max_{\textbf{u},\textbf{v}} \textbf{u}^T\textbf{X}^T\textbf{Y}\textbf{v} \; \textrm{ subject } \; \textrm{ to } \;\| \textbf{u} \|^2_2 \leq 1, \; \| \textbf{v} \|^2_2 \leq 1, \; P_1(\textbf{u}) \leq c_1, \; P_2(\textbf{v}) \leq c_2. \;\;\; (15)$

If $\textbf{X}^T\textbf{Y}$ is replaced by $\textbf{X}$, then (15) becomes the original PMD formulation (6), and thus we can solve (15) using Algorithm 1. Following the authors' earlier notation, this method is denoted PMD($A$,$B$), where $A$ is the penalty on $\textbf{u}$ and $B$ is the penalty on $\textbf{v}$.
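A sketch of the resulting procedure (my implementation; `penalized_cca` is a hypothetical name, not from the paper): form $\textbf{X}^T\textbf{Y}$ and run the PMD($L_1$,$L_1$) updates on it, so the canonical variates come out sparse.

```python
import numpy as np

def soft(a, c):
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)

def l1_l2_update(a, c, tol=1e-8):
    # Lemma 1 update: soft-threshold, normalize, binary-search Delta
    u = a / np.linalg.norm(a)
    if np.sum(np.abs(u)) <= c:
        return u
    lo, hi = 0.0, np.max(np.abs(a))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        su = soft(a, mid)
        if np.sum(np.abs(su)) / np.linalg.norm(su) > c:
            lo = mid
        else:
            hi = mid
    su = soft(a, hi)
    return su / np.linalg.norm(su)

def penalized_cca(X, Y, c1, c2, iters=100):
    # Diagonal penalized CCA: PMD(L1, L1) applied to X^T Y
    Z = X.T @ Y
    v = np.linalg.svd(Z, full_matrices=False)[2][0]   # warm start
    for _ in range(iters):
        u = l1_l2_update(Z @ v, c1)
        v = l1_l2_update(Z.T @ u, c2)
    return u, v

rng = np.random.default_rng(7)
n, p, q = 12, 20, 25              # p, q > n, as in the genomic setting
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
u, v = penalized_cca(X, Y, c1=2.0, c2=2.0)
# u and v are unit vectors with some loadings exactly zero
```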

The authors demonstrate the PMD($L_1$,$L_1$) method on a simple simulated example in which classical CCA cannot be applied since $p, q \gt n$. Two sparse, orthogonal, latent factors are used to generate $\textbf{X}$ and $\textbf{Y}$. For comparison, the authors also computed the SVD of $\textbf{X}^T\textbf{Y}$. Compared to the SVD, PMD($L_1$, $L_1$) does fairly well at identifying linear combinations of the underlying factors.

The authors use genomic data as their main motivation for introducing penalized CCA. They argue that since multiple assays are often used to characterize a single set of samples, some measure is needed to perform inference across the different data sets. As an example, they consider the breast cancer data set, publicly available at http://icbp.lbl.gov/breastcancer and described in <ref name="C2006"/>, where both CGH and gene expression data are available for a set of cancer samples. In this case, the goal is to identify a set of genes that have expression correlated with a set of chromosomal gains or losses. The version of PMD used in this case is PMD($L_1$, FL), where the $L_1$-penalty is applied to the canonical variate corresponding to genes and the fused lasso penalty is applied to the canonical variate corresponding to copy number. PMD($L_1$, FL) was performed once for each of the 23 chromosomes, and nonzero canonical variates were found for all chromosomes except chromosome 2. The resulting $\textbf{v}$ vectors were both sparse and smooth. In order to assess whether PMD($L_1$,FL) is capturing real structure in the breast cancer data, the authors computed $p$-values for the penalized canonical variates using permuted versions of $\textbf{X}$; for every chromosome except chromosome 2, the $p$-values were found to be significant. As an additional check on the resulting canonical variates, the authors use a training set/test set approach. They divide the sample into a training set $(\textbf{X}_{tr},\textbf{Y}_{tr})$ and test set $(\textbf{X}_{te},\textbf{Y}_{te})$, where the test set contains about 3/4 of the entire sample.
Using the training set, they calculate the canonical variates from penalized CCA and then compute $\textrm{cor}(\textbf{X}_{tr}\textbf{u}_{tr},\textbf{Y}_{tr}\textbf{v}_{tr})$ and $\textrm{cor}(\textbf{X}_{te}\textbf{u}_{tr},\textbf{Y}_{te}\textbf{v}_{tr})$. They repeat this experiment multiple times drawing different training sets each time. For most chromosomes, the correlation in the test set is quite high given that the average value should be zero in the absence of a signal.

<references />