Matrix Completion with Noise

==Introduction==

In many well-studied applications, we may face a situation in which only a few entries of a data matrix are observed, while our task depends heavily on an accurate recovery of the original matrix. We would like to know whether such recovery is possible and, if so, how accurately it can be performed.

In the paper under review <ref name="CandesPlan">E. J. Candès and Y. Plan, "Matrix completion with noise," Proceedings of the IEEE, vol. 98, no. 6, pp. 925–936, 2010.</ref>, Candès and Plan discuss these questions. They review the recent literature on recovering a low-rank matrix from an almost minimal set of entries by solving a simple nuclear-norm minimization problem.

They also present results indicating that matrix completion, i.e., recovery of the original unknown matrix, is provably accurate even when the few observed entries are corrupted by a small amount of noise. The recovery error is proportional to the noise level when the number of noisy samples is about <math>nr\log^{2}n</math>, where <math>n</math> and <math>r</math> are the matrix dimension and rank, respectively.

==Notation==

In this section, the notation used throughout the paper is introduced. Three matrix norms are used frequently: the spectral, Frobenius, and nuclear norms of a matrix <math>X \in \mathbb{R}^{n_1\times n_2}</math> with singular values <math>\{ \sigma_k \}</math>, denoted by <math>\parallel X \parallel</math>, <math>\parallel X \parallel_F</math>, and <math>\parallel X \parallel_* := \sum_k \sigma_k</math>, respectively.

Also, operators performing linear transformations on <math>\mathbb{R}^{n_1 \times n_2}</math> are denoted by calligraphic letters; for instance, the identity operator on this space is written <math>\mathcal{I}: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{n_1 \times n_2}</math>.
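As a concrete illustration of these three norms (a sketch added for this summary, not taken from the paper), all three can be computed directly from the singular values with NumPy:

<pre>
import numpy as np

X = np.random.randn(5, 3)                        # an arbitrary matrix
sigma = np.linalg.svd(X, compute_uv=False)       # singular values {sigma_k}

spectral  = sigma.max()                # ||X||   : largest singular value
frobenius = np.sqrt((sigma**2).sum())  # ||X||_F : l2 norm of the singular values
nuclear   = sigma.sum()                # ||X||_* : sum of the singular values

# Sanity checks against NumPy's built-in matrix norms
assert np.isclose(spectral,  np.linalg.norm(X, 2))
assert np.isclose(frobenius, np.linalg.norm(X, 'fro'))
assert np.isclose(nuclear,   np.linalg.norm(X, 'nuc'))
</pre>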


==Exact Matrix Completion==

Given a subset of the complete set of entries of a matrix <math>M \in \mathbb{R}^{n_1 \times n_2}</math>, we intend to recover the matrix as accurately as possible. The available information about <math>M</math> is represented by <math>\mathcal{P}_\Omega(M)</math>, defined entrywise by

<center><math>[\mathcal{P}_\Omega(X)]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & \textrm{otherwise}. \end{cases}</math></center>
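A minimal NumPy sketch of this sampling operator (the names <code>sample_mask</code> and <code>P_Omega</code> are illustrative, not from the paper), with <math>\Omega</math> drawn uniformly at random without replacement as assumed below:

<pre>
import numpy as np

def sample_mask(n1, n2, m, rng):
    """Draw a set Omega of m entry positions uniformly at random, without replacement."""
    idx = rng.choice(n1 * n2, size=m, replace=False)
    mask = np.zeros(n1 * n2, dtype=bool)
    mask[idx] = True
    return mask.reshape(n1, n2)

def P_Omega(X, mask):
    """[P_Omega(X)]_ij = X_ij for (i,j) in Omega, and 0 otherwise."""
    return np.where(mask, X, 0.0)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))
mask = sample_mask(6, 4, m=10, rng=rng)
observed = P_Omega(M, mask)       # the available information about M
</pre>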

The question discussed in the paper is whether the matrix can be recovered from this information alone.

It is worth noting that cases in which an entire row or column is unobserved must be avoided, so the entries are assumed to be sampled at random without replacement. Also, if the singular vectors of the matrix are too sparse, there is no hope of recovering the original matrix accurately; for instance, a matrix with a single nonzero entry will very likely be sampled only at its zero entries, which reveal nothing about it.

The authors consider simple conditions that rule out these cases and guarantee that the singular vectors of the matrix are spread across all coordinates. Write the SVD of the matrix as <math>M =\sum_{k \in [r]} \sigma_k u_k v^{*}_{k}</math>, where the <math>\sigma_{k}</math> are the singular values and <math>u_{k} \in \mathbb{R}^{n_1}</math>, <math>v_{k} \in \mathbb{R}^{n_2}</math> are the singular vectors. They assume that <math>\parallel u_k \parallel_{l_{\infty}} \leq \sqrt{\mu_B / n_1}</math> and <math>\parallel v_k \parallel_{l_{\infty}} \leq \sqrt{\mu_B / n_2}</math> for some small <math>\mu_B \geq 1</math>. With sufficiently spread singular vectors, we can hope that there is a unique low-rank matrix satisfying the data constraints.
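The smallest constant <math>\mu_B</math> satisfying these bounds is easy to compute numerically; the sketch below (<code>coherence_mu</code> is a hypothetical helper, not from the paper) contrasts a random low-rank matrix, which is incoherent, with a maximally "spiky" one:

<pre>
import numpy as np

def coherence_mu(M, r):
    """Smallest mu_B with ||u_k||_inf <= sqrt(mu_B/n1) and ||v_k||_inf <= sqrt(mu_B/n2)."""
    n1, n2 = M.shape
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U, Vt = U[:, :r], Vt[:r, :]               # top-r singular vectors
    return max(n1 * np.abs(U).max()**2, n2 * np.abs(Vt).max()**2)

rng = np.random.default_rng(1)
L = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))  # random rank-2
print(coherence_mu(L, r=2))        # small (of order log n)

spiky = np.zeros((100, 100))
spiky[0, 0] = 1.0                  # singular vectors are e_1: maximally sparse
print(coherence_mu(spiky, r=1))    # 100.0, the worst possible value (mu_B = n)
</pre>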

The recovery is performed by solving the following optimization problem:

<center><math>\textrm{minimize} \;\; \operatorname{rank}(X)</math><br /><br />
<math>\textrm{s.t.} \;\; \mathcal{P}_\Omega(X)= \mathcal{P}_\Omega(M)</math></center>


The rank minimization problem above is NP-hard; nuclear-norm minimization is its tightest convex relaxation:

<center><math>\textrm{minimize} \;\; \parallel X \parallel_*</math><br /><br />
<math>\textrm{s.t.} \;\; \mathcal{P}_\Omega(X)= \mathcal{P}_\Omega(M)</math></center>
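A sketch of this convex program using the off-the-shelf solver CVXPY (an illustration for this summary; the paper does not prescribe an implementation, and for large matrices one would use specialized first-order methods instead):

<pre>
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n1, n2, r = 30, 30, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-2 ground truth
mask = (rng.random((n1, n2)) < 0.5).astype(float)  # Omega: about half the entries

X = cp.Variable((n1, n2))
objective = cp.Minimize(cp.normNuc(X))                           # minimize ||X||_*
constraints = [cp.multiply(mask, X) == cp.multiply(mask, M)]     # P_Omega(X) = P_Omega(M)
cp.Problem(objective, constraints).solve()

# With enough samples of an incoherent low-rank matrix, the relative error is tiny.
print(np.linalg.norm(X.value - M, 'fro') / np.linalg.norm(M, 'fro'))
</pre>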


Noting that the spectral norm is dual to the nuclear norm, and comparing the LP characterization of the <math>l_1</math> norm with the SDP characterization of the nuclear norm, the authors conclude that the above optimization problem is an SDP. Candès and Tao <ref name="ref12">E. J. Candès and T. Tao, "The power of convex relaxation: near-optimal matrix completion," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2053–2080, 2010.</ref> showed that this minimization problem performs the recovery essentially whenever recovery is possible by any other method.
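For concreteness, the standard SDP form of the nuclear-norm problem (a well-known reformulation, stated here for completeness rather than quoted from the paper) is
<center><math>\textrm{minimize} \;\; \tfrac{1}{2}\left(\operatorname{tr}(W_1) + \operatorname{tr}(W_2)\right) \quad \textrm{s.t.} \quad \begin{bmatrix} W_1 & X \\ X^{*} & W_2 \end{bmatrix} \succeq 0, \;\; \mathcal{P}_\Omega(X)= \mathcal{P}_\Omega(M),</math></center>
where the minimization is over <math>X</math> and the auxiliary symmetric matrices <math>W_1, W_2</math>; its optimal value equals the minimal nuclear norm <math>\parallel X \parallel_*</math>.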

The authors mention a theorem from <ref name="ref12" />: if the matrix <math>M</math> is of rank <math>r=O(1)</math> and <math>m</math> entries are observed, then there is a positive constant <math>C</math> such that if <math>m \geq C \mu^{4}_{B} n \log^{2}n</math>, where <math>n = \max(n_1, n_2)</math>, then <math>M</math> is the unique solution of the optimization problem above with high probability.

In order to obtain similar results for other values of the rank, Candès and Tao <ref name="ref12" /> replace the conditions above (which guarantee that the singular vectors are spread across all coordinates) with the strong incoherence property. It is shown that an incoherent matrix, i.e., one with a small strong incoherence parameter <math>\mu</math>, can be recovered from a minimal set of entries.

Consequently, Theorem 2 of Candès and Tao states that, with the same notation as in Theorem 1, there is a constant <math>C</math> such that if <math>m \geq C\mu^{2}nr\log^{6}n</math>, then with high probability <math>M</math> is the unique solution of the nuclear-norm minimization problem above.
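To see why this sample size is nearly minimal, note (a standard counting argument, not spelled out in this summary) that a rank-<math>r</math> matrix of size <math>n \times n</math> has <math>r(2n-r) \approx 2nr</math> degrees of freedom (<math>r</math> singular values plus the entries of <math>r</math> pairs of singular vectors), so any method whatsoever needs <math>m</math> at least on the order of <math>nr</math>; Theorem 2 exceeds this information-theoretic floor only by a polylogarithmic factor.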

==Stable Matrix Completion==

==Experiments==

==References==

<references />