# Matrix Completion with Noise

## Introduction

In many well-studied applications, only a few entries of a data matrix are observed, and the task at hand depends on accurately recovering the original matrix. It is natural to ask whether such recovery is possible and, if so, how accurately it can be performed.

In this paper <ref name=""> </ref>, Candes and Plan discuss these questions. They review the recent literature on recovering a low-rank matrix from an almost minimal set of entries by solving a simple nuclear-norm minimization problem.

They also present results showing that matrix completion, i.e., recovery of the original unknown matrix, is provably accurate even when a small amount of noise corrupts the few observed entries. The recovery error is proportional to the noise level once the number of noisy samples is about $nr\log^{2}{n}$, where $n$ and $r$ are the matrix dimension and rank, respectively.
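The recovery guarantee above concerns nuclear-norm minimization under noisy sampling. A minimal sketch of this idea, using a proximal-gradient loop with singular value soft-thresholding (the prox operator of the nuclear norm), is given below; the rank, noise level, sampling rate, threshold `tau`, and iteration count are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a small rank-2 ground-truth matrix.
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Observe roughly half the entries, corrupted by small Gaussian noise.
mask = rng.random((n, n)) < 0.5
noise = 0.01 * rng.standard_normal((n, n))
Y = np.where(mask, M + noise, 0.0)

def svt(A, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Proximal-gradient iteration for
#   min_X  0.5 * ||P_Omega(X - Y)||_F^2 + tau * ||X||_*
X = np.zeros((n, n))
tau = 0.1  # regularization level (assumed, for illustration)
for _ in range(200):
    G = np.where(mask, X - Y, 0.0)  # gradient of the data-fit term
    X = svt(X - G, tau)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With mild noise and enough samples, the relative error `rel_err` lands near the noise level, in line with the proportionality claim above.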

## Notation

This section introduces the notation used throughout the paper. Three matrix norms are used frequently: for a matrix $X \in \mathbb{R}^{n_1\times n_2}$ with singular values $\{ \sigma_k \}$, the spectral, Frobenius, and nuclear norms are denoted by $\parallel X \parallel$, $\parallel X \parallel_F$, and $\parallel X \parallel_* := \sum_k \sigma_k$, respectively.

Linear operators on $\mathbb{R}^{n_1 \times n_2}$ are denoted by calligraphic letters; for instance, the identity operator on this space is written $\mathcal{I}: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{n_1 \times n_2}$.

<references />