# Introduction

Principal component analysis (PCA) is a useful tool in statistical learning that tries to preserve most of the variability in the data with a small number of principal components. In the classical method, the principal components are the eigenvectors corresponding to the largest eigenvalues of the covariance matrix. Since the classical covariance matrix estimate is very sensitive to outliers, the principal components are easily attracted toward outlying points and may no longer correctly reflect the variation of the regular data points.

To overcome this drawback, two types of modification have been proposed. The first is to simply replace the covariance matrix estimator in classical PCA by a robust estimator. Related work includes Maronna <ref>Maronna, R. A. Robust M-Estimators of Multivariate Location and Scatter. The Annals of Statistics, 4:51-67, 1976. </ref>, Campbell <ref>Campbell, N. A. Robust Procedures in Multivariate Analysis I: Robust Covariance Estimation. Applied Statistics, 29:231-237, 1980. </ref> and Croux and Haesbroeck <ref>Croux, C. and Haesbroeck, G. Principal Components Analysis based on Robust Estimators of the Covariance or Correlation matrix: Influence Functions and Efficiencies. Biometrika, 87:603-618, 2000. </ref>. However, these methods only work well when the data are not high-dimensional: the computational cost of the robust estimators becomes a serious issue as the dimension increases, and in practice they can only handle up to about 100 dimensions.

The second is to use projection pursuit (PP) techniques (see Li and Chen <ref>Li, G., and Chen, Z. Projection-Pursuit Approach to Robust Dispersion Matrices and Principal Components: Primary Theory and Monte Carlo. Journal of the American Statistical Association, 80:759-766, 1985. </ref>, Croux and Ruiz-Gazen <ref>Croux, C., and Ruiz-Gazen, A. A Fast Algorithm for Robust Principal Components Based on Projection Pursuit. COMPSTAT 1996, Proceedings in Computational Statistics, ed. A. Prat, Heidelberg: Physica-Verlag, 211-217, 1996. </ref>). PP obtains the robust principal components by maximizing a robust measure of spread of the projected data.

The authors propose a new approach called ROBPCA, which combines the ideas of PP and robust scatter matrix estimation. ROBPCA can be computed efficiently and is able to detect exact-fit situations. It also yields a diagnostic plot that flags the outliers.

# ROBPCA

ROBPCA is roughly a three-step algorithm. First, the data are transformed to lie in a subspace whose dimension is at most $n-1$. Second, a preliminary covariance matrix is constructed and used to select a $k_{0}$-dimensional subspace that fits the data well. In the final step, the data are projected onto the selected subspace, where their location and scatter matrix are robustly estimated, yielding the final scores in a $k$-dimensional subspace.

Notations:

$\mathbf{X}_{n,p}$: The observed data, $n$ objects and $p$ variables.

$\widehat{\mu}_{0}$: mean vector of $\mathbf{X}_{n,p}$.

$k$: the dimension of low-dimensional subspace into which the data are projected.

$r_{0}$: Rank of $\mathbf{X}_{n,p}-1_{n}\widehat{\mu}_{0}^{\prime}$.

$\alpha$: tuning parameter that represents the robustness of the procedure.

$t_{MCD}$ and $s_{MCD}$: MCD location and scale estimators.<ref>Rousseeuw, P. J. Least Median of Squares Regression. Journal of the American Statistical Association, 79:871-880, 1984. </ref>

## Detailed ROBPCA algorithm

Step 1

ROBPCA starts by finding the affine subspace spanned by the $n$ data points (as proposed by Hubert et al. <ref name="HR">Hubert, M., Rousseeuw, P. J., and Verboven, S. A Fast Method for Robust Principal Components With Applications to Chemometrics. Chemometrics and Intelligent Laboratory Systems, 60:101-111, 2002. </ref>). This is done by performing the singular value decomposition (SVD):

$\,\mathbf{X}_{n,p}-1_{n}\widehat{\mu}_{0}^{\prime}=U_{n,r_{0}}D_{r_{0},r_{0}}V_{r_{0},p}^{\prime}$

Without losing any information, we can now work in the subspace spanned by the $r_{0}$ columns of $V$. Thus, $\,\mathbf{Z}_{n,r_{0}}=UD$ becomes the new data matrix.
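A minimal numpy sketch of this reduction step (function and variable names are our own, not from the paper):

```python
import numpy as np

def reduce_to_affine_subspace(X):
    """Project X (n x p) onto the affine subspace through the data points.

    Returns the reduced data Z (n x r0), the loadings V (p x r0), and the
    column mean mu0, where r0 = rank(X - 1_n mu0').
    """
    mu0 = X.mean(axis=0)                # classical mean, used only for centering
    U, d, Vt = np.linalg.svd(X - mu0, full_matrices=False)
    r0 = int(np.sum(d > 1e-12 * d[0]))  # numerical rank r0
    Z = U[:, :r0] * d[:r0]              # Z = U D, the new n x r0 data matrix
    return Z, Vt[:r0].T, mu0
```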

Step 2

The second step is to find the $h\lt n$ least outlying data points and use their covariance matrix to obtain a subspace of dimension $k_{0}$. The value of $h$ is chosen as

$h=\max \left\{ \alpha n, (n+k_{max}+1)/2 \right\}$

where $k_{max}$ represents the maximal number of components that will be computed.
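For instance, with $n=100$, $\alpha=0.75$ and $k_{max}=10$, this gives $h=\max\{75, 55.5\}=75$. A one-line sketch of this choice (rounding both terms down to integers is our assumption; the paper's exact rounding convention may differ):

```python
def choose_h(n, alpha=0.75, k_max=10):
    # h = max{alpha * n, (n + k_max + 1) / 2}, rounded down to an integer here
    return max(int(alpha * n), (n + k_max + 1) // 2)
```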

Then the subset of least outlying data points is found as follows:

1. For each data point $\mathbf{x}_{i}$, compute its orthogonally invariant outlyingness by maximizing over directions $\mathbf{v}$ (in practice, directions through pairs of data points; see the sketch after this list):

$outl_{O}(\mathbf{x}_{i})=\max_{\mathbf{v}} \frac{\left| \mathbf{x}_{i}^{\prime}\mathbf{v}-t_{MCD}(\mathbf{x}_{j}^{\prime}\mathbf{v}) \right|}{s_{MCD}(\mathbf{x}_{j}^{\prime}\mathbf{v})}$

Here $t_{MCD}$ and $s_{MCD}$ are applied to the $n$ projected points $\mathbf{x}_{j}^{\prime}\mathbf{v}$, $j=1,\ldots,n$. If a direction $\mathbf{v}$ yields $s_{MCD}(\mathbf{x}_{j}^{\prime}\mathbf{v})=0$, we have found a hyperplane orthogonal to $\mathbf{v}$ that contains $h$ observations, and projecting onto it reduces the dimension by one.

This search is repeated until we end up with a dataset in some lower-dimensional space and a set $H_{0}$ indexing the $h$ data points with smallest outlyingness.

2. Compute the empirical mean $\widehat{\mu}_{1}$ and covariance matrix $S_{0}$ of the $h$ points in $H_{0}$, and perform the spectral decomposition of $S_{0}$.

3. Project the data points onto the subspace spanned by the first $k_{0}$ eigenvectors of $S_{0}$ to get the new dataset $\mathbf{X}_{n,k_{0}}^{\star}$.
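The following sketch (our own illustration, not the authors' implementation) approximates the outlyingness by sampling directions through pairs of data points and using the univariate MCD without consistency factors:

```python
import numpy as np

def univariate_mcd(y, h):
    """Univariate MCD: mean and std of the h-subset of y with the smallest
    variance (consistency factors omitted for simplicity)."""
    ys = np.sort(y)
    windows = np.lib.stride_tricks.sliding_window_view(ys, h)
    best = np.argmin(windows.var(axis=1))  # optimal h-subset is contiguous in sorted order
    return windows[best].mean(), windows[best].std()

def outlyingness(Z, h, n_dirs=250, seed=0):
    """Approximate outl(x_i) over randomly sampled directions through two points."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    out = np.zeros(n)
    for _ in range(n_dirs):
        i, j = rng.choice(n, size=2, replace=False)
        v = Z[i] - Z[j]
        if np.linalg.norm(v) < 1e-12:
            continue                        # degenerate pair, skip
        proj = Z @ (v / np.linalg.norm(v))  # project all points on direction v
        loc, scale = univariate_mcd(proj, h)
        if scale > 1e-12:                   # scale = 0 would signal an exact fit
            out = np.maximum(out, np.abs(proj - loc) / scale)
    return out

# H0 indexes the h points with smallest outlyingness:
# H0 = np.argsort(outlyingness(Z, h))[:h]
```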

Step 3

The mean and covariance matrix of $\mathbf{X}_{n,k_{0}}^{\star}$ are then robustly estimated by the FAST-MCD algorithm<ref>Rousseeuw, P. J., and Van Driessen, K. A Fast Algorithm for the Minimum Covariance Determinant Estimator. Technometrics, 41:212-223, 1999. </ref>, and during the iteration procedure the dimensionality can be further reduced whenever the covariance matrix is found to be singular.

FAST-MCD is repeated until we obtain the final dataset $\mathbf{X}_{n,k}$, whose points lie in a $k$-dimensional subspace, and the scores $\mathbf{T}_{n,k}$:

$\mathbf{T}_{n,k}=(\mathbf{X}_{n,k}-1_{n}\widehat{\mu}_{k}^{\prime})\mathbf{P}$

Finally, $\mathbf{P}$ is transformed back into $\mathbb{R}^{p}$ to obtain the robust principal components $\mathbf{P}_{p,k}$ such that

$\mathbf{T}_{n,k}=(\mathbf{X}_{n,p}-1_{n}\widehat{\mu}^{\prime})\mathbf{P}_{p,k}$

Moreover, a robust scatter matrix $\mathbf{S}$ of rank $k$ is also generated by

$\mathbf{S}=\mathbf{P}_{p,k}\mathbf{L}_{k,k}\mathbf{P}_{p,k}^{\prime}$

where $\mathbf{L}_{k,k}$ is the diagonal matrix with eigenvalues $l_{1},\cdots,l_{k}$.
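Putting the final step together, a minimal numpy sketch (names are ours) of forming the scores and the rank-$k$ scatter matrix from a robust center mu, loadings P and eigenvalues L:

```python
import numpy as np

def scores_and_scatter(X, mu, P, L):
    """Scores T = (X - 1_n mu') P and rank-k scatter S = P L P'."""
    T = (X - mu) @ P             # project centered data on the robust loadings
    S = P @ np.diag(L) @ P.T     # rank-k robust scatter matrix
    return T, S
```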

Remarks

1. Step 1 is especially useful when the number of variables is larger than the sample size ($p\gt n$).

2. In Step 2, the choice of $\alpha$ reflects the trade-off between efficiency and robustness: the higher the $\alpha$, the more efficient the estimates on uncontaminated data; the lower the $\alpha$, the more robust the estimates against contaminated samples.

3. Unlike some other robust PCA methods, ROBPCA shares a very nice property with classical PCA: it is location and orthogonally equivariant.

## Diagnostic

ROBPCA can also be used to flag the outliers in the sample. A diagnostic plot is constructed as follows (a code sketch of the two distances appears after the list):

1. For each observation, compute the score distance $SD_{i}=\sqrt{\sum_{j=1}^{k} t_{ij}^{2}/l_{j}}$, which measures the robust distance from the projected point to the center within the PCA subspace.

2. Compute the orthogonal distance $OD_{i}=\left\| \mathbf{x}_{i}-\widehat{\mu}-\mathbf{P}_{p,k}\mathbf{t}_{i}^{\prime} \right\|$ between $\mathbf{x}_{i}$ and its projection onto the $k$-dimensional subspace, where $\mathbf{t}_{i}$ is the $i$th row of $\mathbf{T}_{n,k}$.

3. Plot $OD_{i}$ versus $SD_{i}$, with cutoff $\sqrt{\chi_{k,0.975}^{2}}$ for the score distances and a cutoff based on the Wilson-Hilferty approximation for the orthogonal distances. The four regions of the plot distinguish regular observations, good leverage points, orthogonal outliers, and bad leverage points.
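A minimal numpy/scipy sketch of the two distances (our names, not the authors' code):

```python
import numpy as np
from scipy.stats import chi2

def diagnostic_distances(X, mu, P, L, T):
    """Score and orthogonal distances for the ROBPCA diagnostic plot."""
    SD = np.sqrt(np.sum(T**2 / L, axis=1))   # distance within the PCA subspace
    resid = X - mu - T @ P.T                 # part of each x_i outside the subspace
    OD = np.linalg.norm(resid, axis=1)       # orthogonal distance
    return SD, OD

# SD cutoff: np.sqrt(chi2.ppf(0.975, df=k)); the OD cutoff relies on the
# Wilson-Hilferty approximation, which treats OD**(2/3) as roughly Gaussian.
```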

# Example and Simulations

The performance of ROBPCA and the diagnostic plot is illustrated on real data examples and in simulation studies. The comparison is carried out between ROBPCA and four other types of PCA: classical PCA (CPCA), RAPCA<ref name="HR" />, spherical PCA (SPHER) and ellipsoidal PCA (ELL)<ref>Locantore, N., Marron, J. S., Simpson, D. G., Tripoli, N., Zhang, J. T., and Cohen, K. L. Robust Principal Component Analysis for Functional Data. Test, 8:1-73, 1999. </ref>, where the last three methods are also designed to be robust for high-dimensional data.

<references />