# Motivation

To make quantitative observations about our environment, we must often acquire signals, and classical sampling theory tells us that exactly reconstructing a signal requires some minimum number of samples. This minimum number of samples, or rather the minimum rate at which the signal must be sampled, is called the Nyquist rate.

So, we have been stuck with either following the Nyquist criterion or accepting imprecise results corrupted by artifacts such as aliasing. This is particularly problematic when sampling is very expensive: the measurement device itself may be costly, or acquisition time may be the scarce resource, as in the case of MRI.

Recently, work by Emmanuel Candes and others <ref name="CandesWakin">E. Candes and M. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, 2008.</ref><ref name="CandesTao">E. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.</ref> demonstrated that it is possible to exactly reconstruct some signals from undersampled data. If a signal is sparse, meaning that it has few significant coefficients in some basis, we can take far fewer measurements and use knowledge of the signal's structure to infer the rest. And, it turns out, the results are astonishingly good; we can exactly reconstruct sparse signals with far fewer measurements than were previously thought possible.

The following summarizes the work presented in <ref name="CandesWakin" />.

# Formulation

A sampled signal can be modeled as

$y = \Phi x$

where $y$ is an $m$-dimensional vector of sampled values, $x$ is an $n$-dimensional vector representing the exact signal, and $\Phi$ is an $m \times n$ matrix which samples the exact signal. For example, $y$ could be the CCD sensor measurements in a digital camera, $\Phi$ the acquisition process, and $x$ the real world. If the problem is well-posed, meaning we have taken a sufficient number of samples, we will be able to reconstruct $x$ perfectly. But what if $m \ll n$, meaning that we have undersampled? The problem becomes ill-posed, or under-determined.
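To make the under-determined regime concrete, here is a minimal numpy sketch (the sizes and the random sampling matrix are illustrative assumptions, not from the paper): least squares finds *a* signal consistent with the measurements, but not the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 32                      # n exact values, m << n measurements
x = rng.standard_normal(n)          # the exact signal
Phi = rng.standard_normal((m, n))   # the sampling process
y = Phi @ x                         # the m sampled values

# The minimum-norm least-squares solution fits the measurements exactly...
x_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(np.allclose(Phi @ x_ls, y))   # True: consistent with the data
print(np.allclose(x_ls, x))         # False: not the true signal
```

Infinitely many signals reproduce the same $m$ measurements, so extra structure is needed to pick out the right one.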

Now, consider that many signals can be summarized succinctly in some other basis - that is, the problem of compression or dimensionality reduction. Returning to the camera example, the camera may apply a discrete cosine transform (DCT) for JPEG compression in order to reduce the amount of space required to store the image. So, we know it is possible to compress sparse signals, but only once we have a sufficiently large number of samples.
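As a small illustration of transform sparsity, using the DCT as in the JPEG example (the particular coefficient values here are made up), a signal built from three DCT atoms looks dense in the time domain but is exactly 3-sparse in the DCT domain:

```python
import numpy as np
from scipy.fft import dct, idct

n = 128
c_true = np.zeros(n)
c_true[[5, 12, 40]] = [1.0, -0.7, 0.3]   # three nonzero DCT coefficients
x = idct(c_true, norm='ortho')           # dense-looking time-domain signal

# The orthonormal DCT recovers exactly the sparse coefficient vector.
c = dct(x, norm='ortho')
print(np.count_nonzero(np.abs(c) > 1e-8))  # -> 3
```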

Compressive sensing combines these two stages, sensing (sampling) and compression, in order to solve this under-determined problem. It asks: if we can represent our exact signal $x$ sparsely in some basis $\Psi$ as a coefficient vector $s$, can we solve that same under-determined system when $m$ is only on the order of the sparsity of $s$? If $s$ is $k$-sparse, meaning it has $k$ nonzero elements, the solution effectively lives in $k$ dimensions, and we can use optimization to determine which $k$ of the $n$ elements are nonzero and what their values are. Mathematically,

$y = \Phi x = \Phi \Psi s = \Theta s$

where $\Psi$ is the compression basis, and $\Theta$ is the compressive sensing matrix---the product of the sensing and compression bases. So, we can take some small number of samples $y$, compute the sparse representation $s$ of our exact signal, and then apply $x = \Psi s$ to recover $x$.

Two fundamental premises underlie compressive sensing: sparsity and incoherence.

Sparsity: The implication of sparsity is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss. Formally, consider $f_{S}(t)$, obtained by keeping only the terms corresponding to the $S$ largest values of $(x_i)$ in the expansion $f(t)=\sum_{i=1}^{n}x_i\psi_i(t)$. By definition, $f_S = \Psi x_S$, where $x_S$ is the vector of coefficients $(x_i)$ with all but the largest $S$ set to zero.

Incoherence: In plain English, the coherence measures the largest correlation between any two elements of $\Phi$ and $\Psi$. If $\Phi$ and $\Psi$ contain correlated elements, the coherence is large. Otherwise, it is small.

Ideally, we would like to measure all $n$ coefficients of $f$, but we only get to observe a subset of them. To make this work, we must answer three broad questions:

• How do we pick $\Theta$?
• How can we do the optimization, and how can we be sure it will work?
• Is this algorithm practical? How does it work with noise?

# Choosing a Compressive Sensing Matrix $\Theta$

To guarantee that the compressive sensing matrix is stable, the authors derive the Restricted Isometry Property (RIP) in <ref name="CandesTao2">E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12), Dec. 2005.</ref>. Mathematically, this property can be stated as,

$(1-\delta_k)\|x\|_{\ell_2}^2 \le \|A x\|_{\ell_2}^2 \le (1+\delta_k)\|x\|_{\ell_2}^2$

This formula requires $A$ to be an approximately distance-preserving transformation for all $k$-sparse vectors $x$, up to a constant $\delta_k$ known as the restricted isometry constant. $A$ satisfies the RIP of order $k$ if $\delta_k$ is not too close to 1. When this property holds, all $k$-subsets of the columns of $A$ are nearly orthogonal. If it does not hold, a $k$-sparse signal may lie in the null space of $A$, in which case reconstructing it is impossible.
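For intuition, the restricted isometry constant of a small matrix can be estimated by brute force, checking every $k$-column submatrix (a sketch with illustrative sizes; this exhaustive enumeration is exactly why verifying the RIP is intractable at scale):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 40, 2                 # undersampled: m < n
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # normalize columns to unit length

# delta_k is the worst deviation from 1 of the squared singular values
# of any k-column submatrix of A.
delta = 0.0
for cols in itertools.combinations(range(n), k):
    s = np.linalg.svd(A[:, cols], compute_uv=False)
    delta = max(delta, s.max()**2 - 1.0, 1.0 - s.min()**2)
print(delta)  # well below 1 for this random draw
```

With $\delta_k$ safely below 1, every pair of columns here is nearly orthogonal, as the RIP demands.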

Verifying the RIP for a given matrix is itself intractable (NP-hard), so while it serves as a theoretical foundation for compressive sensing, it does not directly tell us how to build the compressive sensing matrix.

To do this, let's have a look at how $\Theta$ is constructed.

$\Theta = \Phi \Psi$

Is there something about these bases, or about their relationship with each other, that will guarantee RIP? Bases which are maximally incoherent will satisfy this restricted isometry property, so we can use this to guide us as we choose an appropriate compressive sensing matrix. Mathematically, coherence is defined as,

$\mu(\Psi, \Phi) = \sqrt{n} \max_{1 \le k, j \le n} |\langle \psi_k,\phi_j \rangle|$

Note that this function takes on values between 1 and $\sqrt{n}$. So, to find a suitable $\Theta$, we'll need to find two bases which are incoherent.
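As a sanity check on this definition, the spike basis (identity) and the discrete Fourier basis achieve the minimum value $\mu = 1$, i.e. maximal incoherence (a small numpy computation; the choice of $n$ is arbitrary):

```python
import numpy as np

n = 64
Phi = np.eye(n)                              # spike (sampling) basis
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unit-norm Fourier basis vectors

# Coherence: sqrt(n) times the largest |inner product| between basis elements.
mu = np.sqrt(n) * np.max(np.abs(Phi.conj().T @ Psi))
print(mu)  # 1.0 up to floating point
```

Every Fourier vector has entries of magnitude $1/\sqrt{n}$, so no spike is correlated with any Fourier vector more than the minimum possible amount.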

We know $\Phi$ will model the sampling phase and $\Psi$ the compression, so we must at least select matrices with these properties.

Based on the definition of coherence, spikes and sinusoids will be incoherent, so we could choose $\Phi$ as a matrix of impulses and $\Psi$ as a compression basis of sinusoids. Another option could be noiselets and wavelets for the sensing and compression, respectively. But must we be so restricted?

It turns out that random noise is incoherent with just about everything. Namely, in most cases, drawing the entries of the sensing matrix i.i.d. from a suitable random distribution (e.g., a zero-mean Gaussian, or columns sampled uniformly from the unit sphere in $\mathbb{R}^m$) yields a matrix that violates the RIP only with probability exponentially small in $m$. Thus, such random sensing matrices (or others that can be proven to share these properties) are ideal choices for the sampling matrix.

Now, we know how to construct a stable compressive sensing matrix $\Theta$, but how do we actually solve the under-determined system?

# Optimization

We can start with the standard approach---least squares. It is a convenient choice since it has a closed-form solution, but minimizing the $\ell_2$ norm spreads energy across many coefficients rather than concentrating it in a few, so it gives very bad results in this context.

What about the $\ell_0$ norm? It counts the number of nonzero entries in the vector, which is exactly what we want to minimize! But it turns out we would have to try every ${n \choose k}$ placement of the nonzeros to find the solution, which is NP-hard and thus intractable. The authors discovered that minimizing the $\ell_1$ norm instead yields exact results; they concluded that the reconstruction is nearly as good as what an oracle with perfect knowledge of the object could provide.

Figure 1 illustrates the exact result given by the $\ell_1$ norm compared to the inaccuracy of, for example, the $\ell_2$ norm.

So the problem we are trying to solve can be stated formally as:

$\min_{s \in \mathbb{R}^n} \|s\|_1 \quad \mbox{subject to} \quad y = \Theta s$.

This is a well-established problem known as basis pursuit. Basis pursuit problems can easily be transformed into a linear programming problem. Suppose we let every $s_i = s_i^+-s_i^-$ where both $s_i^+$ and $s_i^-$ are non-negative. Then $|s_i| = s_i^++s_i^-$ and the optimization problem can be rewritten as

$\min_{s^+,s^- \in \mathbb{R}^n} \sum_i (s_i^+ + s_i^-) \quad \mbox{subject to} \quad y = \Theta (s^+ - s^-) \mbox{ and } s^+, s^- \geq 0$.

This problem can be solved with standard linear programming methods: interior-point methods run in polynomial time (roughly $O(n^3)$ per iteration), while the simplex method is fast in practice despite its exponential worst case.
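The split formulation drops straight into an off-the-shelf LP solver. Here is a sketch using `scipy.optimize.linprog` (the sizes, seed, and Gaussian $\Theta$ are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, k = 100, 30, 4
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Theta = rng.standard_normal((m, n))
y = Theta @ s_true                        # m undersampled measurements

# minimize sum(s+ + s-)  subject to  Theta(s+ - s-) = y,  s+, s- >= 0
c = np.ones(2 * n)
A_eq = np.hstack([Theta, -Theta])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(s_hat - s_true)))     # essentially zero at these sizes
```

With $m = 30$ measurements of a 4-sparse signal in $\mathbb{R}^{100}$, basis pursuit recovers the signal exactly (with overwhelming probability for a Gaussian $\Theta$).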

Now, we know how to construct and solve this system. But, how well does it work? And, more specifically, how well does it work with noise?

# Performance and Robustness

First, let's address the question of how well the technique works. The result matches intuition---the number of samples required to reconstruct a signal is related to the incoherence of bases, sparsity and the dimension of the signal we wish to reconstruct. The minimum bound of $m$, the number of samples, is given by,

$m \geq C\, k \log(\frac{n}{k})$

if the bases are maximally incoherent, for example if we use random projections. In practice, a typical rule of thumb is that $m \ge 4k$ samples suffice.

However, even if $m$ is sufficiently small, this isn't practical if the method is not robust. Let's consider our basic formulation, and add some error,

$y = \Theta s + z,$

where $z$ is some random noise, for example Gaussian white noise $\,z \sim N(0,1/m)$. To solve this, let's relax the original $\ell_1$ formulation to

$\min_{\tilde{s}} \|\tilde{s}\|_{\ell_1} \quad \mbox{subject to} \quad \| \Theta \tilde{s} - y \|_{\ell_2} \le \epsilon$,

where $\epsilon$ is the error bound. This is closely related to the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm. Like the original $\ell_1$ formulation, it aims to minimize the sum of the absolute values of the elements of $s$, making it appropriate for compressed sensing.
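One simple way to solve the penalized (LASSO) form of this problem is proximal gradient descent with soft thresholding, known as ISTA. This is a sketch of that substitute solver, not the method used in the paper; the sizes, seed, noise level, and penalty $\lambda$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 100, 40, 4
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Theta = rng.standard_normal((m, n)) / np.sqrt(m)
y = Theta @ s_true + 0.01 * rng.standard_normal(m)   # noisy measurements

# ISTA for  min_s 0.5*||Theta s - y||_2^2 + lam*||s||_1
lam = 0.02
step = 1.0 / np.linalg.norm(Theta, 2) ** 2   # 1/L, L = Lipschitz constant
s = np.zeros(n)
for _ in range(3000):
    z = s - step * (Theta.T @ (Theta @ s - y))                # gradient step
    s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
print(np.linalg.norm(s - s_true))   # small reconstruction error
```

The recovered coefficients are close to the truth, with residual error driven by the measurement noise and the shrinkage penalty, consistent with the error bound below.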

With noise considered, the authors report an error rate of,

$\|\tilde{s} - s \|_{\ell_2} \le C_0 \| s - s_k \|_{\ell_1} / \sqrt{k} + C_1 \epsilon,$

where $s$ is the true signal, $s_k$ is the true signal with all but its $k$ largest entries set to zero, and $\tilde{s}$ is the estimate of $s$. Intuitively, the total error is bounded by the error one would obtain without noise plus a term proportional to the noise itself. The authors report that $C_0$ and $C_1$ are typically small, so the noise level in the output is directly proportional to the noise level in the input. This is a very good result: a small amount of noise causes only a proportionally small degradation in the reconstruction.

## Applications

The paper provides an idea of some ways compressive sensing can be used in practice:

• Data Compression - if $\Psi$ is unknown or impractical to implement at the encoder (and is used only for decoding), then a randomly generated $\Phi$ (which, as discussed above, fails the RIP only with exponentially small probability) can be used for encoding.
• Acquiring Data - It might be undesirable, or simply impractical, to collect a full set of discrete samples from a signal. CS can guide the creation of hardware that acquires far fewer samples directly.

# Conclusion

We have seen that we can reconstruct signals with far fewer measurements than were previously thought possible. Now, the number of measurements is largely governed by the sparsity of the signal. Ultimately, this has huge implications for the foundations of electronic measurement and data acquisition.

<references />