Compressive Sensing (Candes)

= Motivation =
In order to make quantitative observations about our environment, we must often acquire signals.
And to acquire these signals, we must have some minimum number of samples in order to exactly reconstruct the signal.
This minimum number of samples, or rather the minimum rate at which the signal must be sampled, is called the [http://en.wikipedia.org/wiki/Nyquist_rate Nyquist rate].

So, we have been stuck with either following the Nyquist criterion, or else being left to deal with imprecise results, corrupted with artifacts like aliasing.
This is particularly problematic when sampling is very expensive.
For example, what if the measurement device is expensive?
Or what if the time requirements are particularly expensive, as in the case of MRI?


Recently, work by Emmanuel Candes and others <ref name="CandesWakin">E. Candes and M. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, 2008.</ref><ref name="CandesTao">E. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.</ref> demonstrated that it is possible to exactly reconstruct some signals with undersampled data.
If a signal is sparse, meaning that it does not have many features, we can take far fewer measurements and utilize knowledge of the structure of the signal to infer the rest.
And, it turns out, the results are astonishingly good; we can exactly reconstruct sparse signals with far fewer measurements than were previously thought possible.
The following summarizes the work presented in <ref name="CandesWakin" />.


= Formulation =
A sampled signal can be modeled as

:<math>y = \Phi x</math>


where <math>y</math> is an <math>m</math>-dimensional vector of sampled values, <math>x</math> is an <math>n</math>-dimensional vector of the exact signal, and <math>\Phi</math> is an <math>m</math> by <math>n</math> matrix which samples the exact signal.
An example of this could be <math>y</math> as the CCD sensor measurements in a digital camera, <math>\Phi</math> the acquisition process, and <math>x</math> the real world.
If the problem is well-posed, meaning we've done a sufficient amount of sampling, we'll be able to reconstruct <math>x</math> perfectly.
But what if <math>m \ll n</math>, meaning that we've undersampled?
This becomes an ill-posed or under-determined problem.


Now, consider that many signals can be summarized succinctly in some other basis; this is the problem of compression or dimensionality reduction.
Returning to the camera example, the camera may apply a discrete cosine transform (DCT) for JPEG compression in order to reduce the amount of space required to store the image.
So, we know it is possible to compress sparse signals, but only once we have a sufficiently large number of samples.

Compressive sensing combines these two stages, sensing (sampling) and compression, in order to solve this under-determined problem.
It asks: if we can compress our exact signal <math>x</math> into some compressed basis <math>\Psi</math> where it can be represented sparsely as a signal <math>s</math>, then can we solve that same under-determined system if <math>m</math> is approximately equal to the number of nonzero elements of <math>s</math>?
If our sparse signal <math>s</math> is ''k-sparse'', meaning it has <math>k</math> nonzero elements, we can fix our solution for <math>s</math> in <math>k</math> dimensions, and do some kind of optimization for the remaining <math>n-k</math> elements.
Mathematically,


:<math>y = \Phi x = \Phi \Psi s = \Theta s</math>

where <math>\Psi</math> is a compressed basis, and <math>\Theta</math> is the compressive sensing matrix, the product of the sensing and compression bases.
So, we can take some small number of samples <math>y</math>, compute the sparse representation <math>s</math> of our exact signal <math>x</math>, and then apply the inverse compression approximation to recover <math>x</math>.
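As a concrete illustration, here is a minimal numerical sketch of this measurement model (the sizes, the Gaussian sensing matrix, and the DCT compression basis below are illustrative assumptions, not choices from the paper):
<pre>
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8             # signal length, number of samples, sparsity (assumed toy sizes)

# k-sparse coefficient vector s in the compression basis
s = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s[support] = rng.normal(size=k)

# Psi: orthonormal inverse-DCT matrix; its columns are cosine atoms (the compression basis)
Psi = idct(np.eye(n), norm='ortho', axis=0)
x = Psi @ s                      # exact signal in the original domain

# Phi: random Gaussian sensing matrix; Theta = Phi Psi is the compressive sensing matrix
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
Theta = Phi @ Psi
y = Phi @ x                      # m undersampled measurements; equals Theta @ s
</pre>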


There are two fundamental premises underlying CS: sparsity and incoherence.

'''Sparsity''': The implication of sparsity is clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider <math>f_S(t)</math> obtained by keeping only the terms corresponding to the <math>S</math> largest values of <math>(x_i)</math> in the expansion <math>f(t)=\sum_{i=1}^{n}x_i\psi_i(t)</math>. By definition, <math>f_S = \Psi x_S</math>, where <math>x_S</math> is the vector of coefficients <math>(x_i)</math> with all but the largest <math>S</math> set to zero.
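As a rough numerical illustration of this thresholding step (a sketch with an assumed toy signal, with the DCT standing in for <math>\Psi</math>), we can keep only the <math>S</math> largest DCT coefficients of a compressible signal and check how little is lost:
<pre>
import numpy as np
from scipy.fft import dct, idct

n, S = 512, 25
t = np.linspace(0, 1, n)
# A toy signal that is compressible in the DCT basis: a few smooth oscillations
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

x = dct(f, norm='ortho')          # coefficients of f in the DCT basis (the x_i in the text)
idx = np.argsort(np.abs(x))[-S:]  # indices of the S largest-magnitude coefficients
x_S = np.zeros_like(x)
x_S[idx] = x[idx]                 # all but the S largest coefficients set to zero
f_S = idct(x_S, norm='ortho')     # f_S = Psi x_S

rel_err = np.linalg.norm(f - f_S) / np.linalg.norm(f)
print(f"kept {S} of {n} coefficients, relative error {rel_err:.2e}")
</pre>
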
 
'''Incoherence''': In plain English, the coherence measures the largest correlation between any two elements of <math>\Phi</math> and <math>\Psi</math>. If <math>\Phi</math> and <math>\Psi</math> contain correlated elements, the coherence is large. Otherwise, it is small.
 
Ideally, we would like to measure all the <math>n</math> coefficients of <math>f</math>, but we only get to observe a subset of these and collect the corresponding data. To do this, we must answer three broad questions:
 
* How do we pick <math>\Theta</math>?
* How can we do the optimization, and how can we be sure it will work?
* Is this algorithm practical? How does it work with noise?


= Choosing a Compressive Sensing Matrix <math>\Theta</math> =
To guarantee that the compressive sensing matrix is stable, the authors derive the ''Restricted Isometry Property'' (RIP) in <ref name="CandesTao2">E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12), 2005.</ref>.
Mathematically, this property can be stated as

:<math>(1-\delta_k)\|x\|_{\ell_2}^2 \le \|A x\|_{\ell_2}^2 \le (1+\delta_k)\|x\|_{\ell_2}^2</math>

This formula implies that <math>A</math> must be a distance preserving transformation for all <math>k</math>-sparse vectors <math>x</math>, bounded by some constant <math>\delta_k</math>, known as the ''restricted isometry constant''. <math>A</math> satisfies the RIP of order <math>k</math> if <math>\delta_k</math> is not too close to 1. When this property holds, all <math>k</math>-subsets of the columns of <math>A</math> are nearly orthogonal. If this property does not hold, then it is possible for a <math>k</math>-sparse signal to be in the null space of <math>A</math>, and in this case it may be impossible to reconstruct these vectors.

Evaluating the RIP property is itself intractable (NP-hard), so while it serves as a theoretical foundation for compressive sensing, it still does not reveal a great deal in telling us how to build the compressive sensing matrix.


To do this, let's have a look at how <math>\Theta</math> is constructed:

:<math>\Theta = \Phi \Psi</math>

Is there something about these bases, or about their relationship with each other, that will guarantee RIP? Bases which are maximally incoherent will satisfy this restricted isometry property, so we can use this to guide us as we choose an appropriate compressive sensing matrix. Mathematically, coherence is defined as

:<math>\mu(\Psi, \Phi) = \sqrt{n} \max_{1 \le k, j \le n} |\langle \psi_k,\phi_j \rangle|</math>
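As a small numerical sketch (illustrative bases and sizes, not code from the paper), this definition can be evaluated directly, for example for a spike basis against a DCT basis, and for a random orthonormal basis against the same DCT basis:
<pre>
import numpy as np
from scipy.fft import idct

def coherence(Phi, Psi):
    # mu(Psi, Phi) = sqrt(n) * max_{k,j} |<psi_k, phi_j>| for two orthonormal bases
    n = Phi.shape[0]
    return np.sqrt(n) * np.max(np.abs(Phi.T @ Psi))

n = 128
Spikes = np.eye(n)                              # spike (identity) basis
Psi = idct(np.eye(n), norm='ortho', axis=0)     # DCT (sinusoid-like) basis

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))    # a random orthonormal basis

print("spikes vs DCT:", coherence(Spikes, Psi))   # about sqrt(2), near the minimum value of 1
print("random vs DCT:", coherence(Q, Psi))        # typically around sqrt(2 log n), far below sqrt(n)
</pre>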


Note that this function takes on values between 1 and <math>\sqrt{n}</math>. So, to find a suitable <math>\Theta</math>, we'll need to find two bases which are incoherent.

We know <math>\Phi</math> will model the sampling phase and <math>\Psi</math> the compression, so we must at least select matrices with these properties.
Based on the definition of coherence, spikes and sinusoids will be incoherent, so we could choose <math>\Phi</math> as a matrix of impulses and <math>\Psi</math> as a compression basis of sinusoids.
Another option could be noiselets and wavelets for the sensing and compression, respectively.
But must we be so restricted?

It turns out that random noise is incoherent with just about everything. Namely, in most cases, a matrix whose entries are iid samples from a suitable random distribution (e.g. a Gaussian with zero mean, or columns drawn uniformly from the unit sphere in <math>\mathbb{R}^m</math>) will violate the RIP only with exponentially small (in <math>m</math>) probability. Thus, these sorts of sensing matrices (or others that we can prove have the same properties) are ideal choices for our sampling matrix.
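The following sketch (toy sizes; it only samples random sparse vectors, so it is an illustration rather than a certificate of the RIP) checks empirically that a random Gaussian sensing matrix approximately preserves the length of <math>k</math>-sparse vectors:
<pre>
import numpy as np

rng = np.random.default_rng(0)
n, m, k, trials = 512, 128, 10, 2000

# Random Gaussian sensing matrix, scaled so that E[||Theta x||^2] = ||x||^2
Theta = rng.normal(size=(m, n)) / np.sqrt(m)

ratios = np.empty(trials)
for i in range(trials):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.normal(size=k)
    ratios[i] = np.sum((Theta @ x) ** 2) / np.sum(x ** 2)

# If Theta behaves like a restricted isometry of order k, these ratios cluster tightly around 1
print(f"min {ratios.min():.2f}, max {ratios.max():.2f}, mean {ratios.mean():.2f}")
</pre>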


Now, we know how to construct a stable compressive sensing matrix <math>\Theta</math>, but how do we actually solve the under-determined system?

= Optimization =
We can start with the standard approach: least squares.
It's a convenient choice since it has a closed-form solution, but it turns out to give very bad results in this context; minimizing the <math>\ell_2</math> norm spreads energy over all of the coefficients, so the solution is almost never sparse.


What about the <math>\ell_0</math> norm?
Intuitively, this counts the number of nonzero entries in the vector, which is exactly what we want to minimize!
But, it turns out that we'd have to try every <math>{n \choose k}</math> possible combination of nonzero locations to find the solution, which is NP-hard, and thus intractable.
The authors discovered that we can solve this using the <math>\ell_1</math> norm and obtain exact results. They concluded that the reconstruction is nearly as good as what an oracle could achieve with perfect knowledge of the object.
<br /><br />
'''Figure 1''' illustrates the exact recovery given by the <math>\ell_1</math> norm compared to the inaccuracy of, for example, the <math>\ell_2</math> norm. [[File:normsgd.jpg|center|'''Figure 1''': Comparison of <math>\ell_1</math> and <math>\ell_2</math> norms for recovering signals. Diagram from <ref name="CandesWakin" />]]
 
So the problem we are trying to solve can be stated formally as:
 
:<math>\min_{s \in \mathbb{R}^n} \|s\|_1 \quad \mbox{subject to} \quad y = \Theta s</math>.
 
This is a well-established problem known as ''basis pursuit''.  Basis pursuit problems can easily be transformed into a linear programming problem.  Suppose we let every <math>s_i = s_i^+-s_i^-</math> where both <math>s_i^+</math> and <math>s_i^-</math> are non-negative. Then <math>|s_i| = s_i^++s_i^-</math> and the optimization problem can be rewritten as


:<math>\min_{s^+,s^- \in \mathbb{R}^n} \sum_i (s_i^+ + s_i^-) \quad \mbox{subject to} \quad y = \Theta (s^+ - s^-) \mbox{ and } s^+, s^- \geq 0</math>.

This problem can be solved with standard linear programming methods such as the simplex method; interior-point LP solvers have a worst-case complexity of roughly <math>O(n^3)</math>.
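Here is a minimal end-to-end sketch of this linear-programming formulation (illustrative sizes, with <code>scipy.optimize.linprog</code> standing in as a generic LP solver rather than the authors' implementation):
<pre>
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5

# Build Theta = Phi Psi and noiseless measurements y = Theta s
Psi = idct(np.eye(n), norm='ortho', axis=0)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
Theta = Phi @ Psi
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = Theta @ s_true

# LP form: minimize sum(s+ + s-) subject to Theta (s+ - s-) = y, with s+, s- >= 0
c = np.ones(2 * n)
A_eq = np.hstack([Theta, -Theta])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
s_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(s_hat - s_true))
x_hat = Psi @ s_hat    # recovered signal in the original domain
</pre>
With these toy sizes, the recovered coefficients typically match the true ones to within numerical tolerance.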


Now, we know how to construct and solve this system.
But, how well does it work? And, more specifically, how well does it work with noise?

= Performance and Robustness =
First, let's address the question of how well the technique works.
The result matches intuition: the number of samples required to reconstruct a signal is related to the incoherence of the bases, the sparsity, and the dimension of the signal we wish to reconstruct.
A lower bound on <math>m</math>, the number of samples, is given by,


:<math>m \geq C k \log(\frac{n}{k})</math>


if the bases are maximally incoherent, for example if we use random projections.
In practice, the typical number of samples has been found to be approximately <math>m \ge 4k</math>.


However, even if <math>m</math> can be this small, the method isn't practical if it is not robust to noise.
Let's consider our basic formulation, and add some error,


:<math>y = \Theta s + z,</math>


where <math>z</math> is some random noise, for example Gaussian white noise <math>\,z \sim N(0,1/m)</math>.
To solve this, let's relax the original <math>\ell_1</math> formulation to


:<math>\min_{\tilde{s}} \|\tilde{s}\|_{\ell_1} \quad \mbox{subject to} \quad \| \Theta \tilde{s} - y \|_{\ell_2} \le \epsilon</math>,


where <math>\epsilon</math> is the error bound.
This is closely related to the [http://en.wikipedia.org/wiki/Lasso_(statistics)#LASSO_method Least Absolute Shrinkage and Selection Operator] (LASSO) algorithm.
Like the original <math>\ell_1</math> formulation, it aims to minimize the sum of the absolute values of the elements of <math>s</math>, making it appropriate for compressed sensing.
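The following is a sketch of the noisy case (assumed toy sizes, a hand-tuned regularization weight, and scikit-learn's Lasso, which solves the Lagrangian form of the problem above rather than the constrained form):
<pre>
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k, sigma = 128, 48, 5, 0.01

Psi = idct(np.eye(n), norm='ortho', axis=0)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
Theta = Phi @ Psi
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = Theta @ s_true + sigma * rng.normal(size=m)   # noisy measurements y = Theta s + z

# Lagrangian (LASSO) form: minimize (1/(2m)) * ||y - Theta s||^2 + alpha * ||s||_1
lasso = Lasso(alpha=0.005, fit_intercept=False, max_iter=100000)  # alpha hand-tuned to the noise level
lasso.fit(Theta, y)
s_hat = lasso.coef_

print("reconstruction error:", np.linalg.norm(s_hat - s_true))
print("estimated support size:", np.count_nonzero(np.abs(s_hat) > 1e-6))
</pre>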


With noise considered, the authors report an error bound of,


:<math>\|\tilde{s} - s \|_{\ell_2} \le C_0 \| s - s_k \|_{\ell_1} / \sqrt{k} + C_1 \epsilon,</math>


where <math>s</math> is the true signal, <math>s_k</math> is <math>s</math> with all but its <math>k</math> largest entries set to zero, and <math>\tilde{s}</math> is the estimate of <math>s</math>.
Intuitively, this means the total error is bounded by the error one would obtain without noise, plus the error due to the noise itself.
The authors report <math>C_0</math> and <math>C_1</math> to be typically small.
This means the noise levels output by the system are directly proportional to the noise levels input.
This is a very good result, since a small amount of noise only degrades the reconstruction proportionally rather than causing it to break down.
= Applications =
The paper provides an idea of some ways compressive sensing can be used in practice:
*'''Data Compression''' - if <math>\Psi</math> is unknown or impractical to implement at the encoder (and is used only for decoding), then a randomly generated <math>\Phi</math> (which, as discussed above, fails the RIP only with exponentially small probability) can be used for encoding.
*'''Acquiring Data''' - it may be difficult or impossible to acquire a full set of discrete samples of a signal. CS can guide the creation of hardware that instead acquires the signal from far fewer measurements.


= Conclusion =
We have seen that we can reconstruct signals with far fewer measurements than were previously thought possible.
Now, the number of measurements is largely governed by the sparsity of the signal.
Ultimately, this has huge implications for the foundations of electronic measurement and data acquisition.
= References =
<references />
