# Matrix Completion with Noise

## Introduction

In many applications we observe only a few entries of a data matrix, yet the task at hand depends on accurately recovering the full matrix. Two natural questions arise: is such recovery possible, and if so, how accurately can it be performed?

In the paper under review <ref name=""> </ref>, Candès and Plan discuss these questions. They survey the recent literature on recovering a low-rank matrix from a near-minimal set of entries by solving a simple nuclear-norm minimization problem.

They also present results showing that matrix completion, i.e., recovery of the original unknown matrix, is provably accurate even when a small amount of noise corrupts the few observed entries. The recovery error is proportional to the noise level when the number of (noisy) samples is about $nr\log^{2}{n}$, where $n$ and $r$ are the matrix dimension and rank, respectively.

## Notation

This section introduces the notation used throughout the paper. Three norms of a matrix $X \in \mathbb{R}^{n_1\times n_2}$ with singular values $\{ \sigma_k \}$ are used frequently: the spectral, Frobenius, and nuclear norms, denoted by $\parallel X \parallel$, $\parallel X \parallel_F$, and $\parallel X \parallel_* := \sum_k \sigma_k$, respectively.
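As a quick sanity check, all three norms can be read off the singular values. The sketch below uses NumPy and an arbitrary example matrix:

```python
import numpy as np

# Arbitrary example matrix with singular values 4 and 3.
X = np.array([[3.0, 0.0], [0.0, 4.0], [0.0, 0.0]])

# Singular values of X.
sigma = np.linalg.svd(X, compute_uv=False)

spectral = sigma.max()                  # ||X||   = largest singular value
frobenius = np.sqrt((sigma**2).sum())   # ||X||_F = l2 norm of singular values
nuclear = sigma.sum()                   # ||X||_* = sum of singular values

# Cross-check against NumPy's built-in matrix norms.
assert np.isclose(spectral, np.linalg.norm(X, 2))
assert np.isclose(frobenius, np.linalg.norm(X, 'fro'))
assert np.isclose(nuclear, np.linalg.norm(X, 'nuc'))
```

For this matrix the spectral, Frobenius, and nuclear norms come out to $4$, $5$, and $7$, which matches $\max_k \sigma_k$, $\sqrt{\sum_k \sigma_k^2}$, and $\sum_k \sigma_k$ for $\{\sigma_k\} = \{4, 3\}$.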

Linear operators on $\mathbb{R}^{n_1 \times n_2}$ are denoted by calligraphic letters; for instance, the identity operator on this space is written $\mathcal{I}: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{n_1 \times n_2}$.

## Exact Matrix Completion

Given a subset of the entries of a matrix $M \in \mathbb{R}^{n_1 \times n_2}$, we wish to recover the matrix as accurately as possible. The available information about $M$ is represented by $\mathcal{P}_\Omega(M)$, where

$[\mathcal{P}_\Omega(X)]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & \textrm{otherwise}. \end{cases}$

The question the paper addresses is whether the matrix can be recovered from this information alone.
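The sampling operator $\mathcal{P}_\Omega$ simply keeps the observed entries and zeroes out the rest. A minimal sketch, with $\Omega$ represented as a set of index pairs (the names are illustrative):

```python
import numpy as np

def P_Omega(X, Omega):
    """Keep the entries of X indexed by Omega; zero out everything else."""
    Y = np.zeros_like(X)
    for i, j in Omega:
        Y[i, j] = X[i, j]
    return Y

M = np.arange(1.0, 7.0).reshape(2, 3)   # toy 2x3 matrix [[1,2,3],[4,5,6]]
Omega = {(0, 0), (1, 2)}                # two observed entries
print(P_Omega(M, Omega))
# [[1. 0. 0.]
#  [0. 0. 6.]]
```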

Cases in which an entire row or column is unobserved must be avoided, so the entries are assumed to be sampled uniformly at random without replacement. Moreover, if the singular vectors of the matrix are very sparse, there is no hope of recovering the original matrix accurately.

The authors consider a simple set of conditions to rule out these cases and guarantee that the singular vectors of the matrix are spread across all coordinates. Write the SVD of $M$ as $M =\sum_{k \in [r]} \sigma_k u_k v^{*}_{k}$, where the $\sigma_k$ are the singular values and $u_k \in \mathbb{R}^{n_1}$, $v_k \in \mathbb{R}^{n_2}$ are the singular vectors. They assume that $\parallel u_k \parallel_{l_{\infty}} \leq \sqrt{\mu_B / n_1}$ and $\parallel v_k \parallel_{l_{\infty}} \leq \sqrt{\mu_B / n_2}$ for some small $\mu_B \geq 1$. With sufficiently spread singular vectors, we can hope that there is a unique low-rank matrix satisfying the data constraints.
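The smallest $\mu_B$ satisfying both bounds can be computed numerically. The sketch below draws a random low-rank matrix, whose singular vectors are typically well spread; the sizes and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 40, 30, 3

# Random rank-r matrix; Gaussian factors typically have well-spread singular vectors.
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T

# Smallest mu_B with ||u_k||_inf <= sqrt(mu_B/n1) and ||v_k||_inf <= sqrt(mu_B/n2).
mu_B = max(n1 * np.abs(U).max() ** 2, n2 * np.abs(V).max() ** 2)
print(mu_B)  # always >= 1; small values mean spread-out singular vectors
```

Since each singular vector has unit norm, its largest squared entry is at least $1/n$, which is why $\mu_B \geq 1$ always holds.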

The recovery is performed by solving the following optimization problem.

$\textrm{minimize}\; \mathrm{rank}(X)$

$\textrm{s.t.} \; \mathcal{P}_\Omega(X)= \mathcal{P}_\Omega(M)$

The rank minimization problem above is NP-hard; nuclear norm minimization is its tightest convex relaxation.

$\textrm{minimize}\; \parallel X \parallel_* \;$

$\textrm{s.t.} \; \mathcal{P}_\Omega(X)= \mathcal{P}_\Omega(M)$

Noting that the spectral norm is dual to the nuclear norm, and comparing the LP characterization of the $l_1$ norm with the SDP characterization of the nuclear norm, the authors conclude that the above optimization problem is an SDP. Candès and Tao <ref name="ref12"> </ref> showed that this minimization problem succeeds in recovering $M$ whenever recovery is possible by any other method.
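The paper treats this problem as an SDP; in practice it is often solved by simpler iterative schemes. The sketch below is a minimal version of the singular value thresholding (SVT) iteration of Cai, Candès, and Shen, not the formulation in the paper under review, and the problem sizes and parameters are illustrative choices:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters):
    """Singular value thresholding (SVT) sketch: mask is 1 on observed
    entries and 0 elsewhere; M_obs carries the observed values."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += delta * mask * (M_obs - X)                  # step toward the data
    return X

rng = np.random.default_rng(1)
u, v = rng.standard_normal(10), rng.standard_normal(8)
M = np.outer(u, v)                                  # toy rank-1 matrix
mask = (rng.random(M.shape) < 0.6).astype(float)    # ~60% observed
X = svt_complete(mask * M, mask, tau=5.0, delta=1.2, iters=1000)
print(np.linalg.norm(X - M) / np.linalg.norm(M))    # relative recovery error
```

Each iteration shrinks the singular values of a running matrix $Y$ (a proximal step for the nuclear norm) and then pushes $Y$ back toward agreement with the observed entries.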

The authors cite a theorem from <ref name="ref12"> </ref>: if the matrix $M$ has rank $r=O(1)$ and $m$ entries are observed, then there is a positive constant $C$ such that if $m \geq C \mu^{4}_{B} n \log^{2}n$, $M$ is the unique solution of the optimization problem above with high probability.
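To get a feel for this bound, take the hypothetical values $n = 1000$, $\mu_B = 1$, and $C = 1$ (the constant $C$ is unspecified in the theorem, so the numbers are purely illustrative):

```python
import math

n = 1000              # hypothetical matrix dimension
mu_B, C = 1.0, 1.0    # illustrative values; C is an unspecified constant
m = C * mu_B**4 * n * math.log(n) ** 2
print(round(m), m / n**2)  # about 47717 samples, roughly 4.8% of all entries
```

In other words, under these illustrative constants, observing on the order of 5% of the entries already suffices for exact recovery of a bounded-rank $1000 \times 1000$ matrix.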

To obtain similar results for other values of the rank, Candès and Tao <ref name="ref12"> </ref> replace the conditions above (which guarantee that the singular vectors are spread across all coordinates) with the strong incoherence property. They show that an incoherent matrix with a small strong incoherence parameter $\mu$ can be recovered from a near-minimal set of entries.

Consequently, Theorem 2 of Candès and Tao states that, with the same notation as in Theorem 1, there is a constant $C$ such that if $m \geq C\mu^{2}nr\log^{6}n$, then with high probability $M$ is the unique solution to the nuclear norm minimization problem above.

<references />