# Difference between revisions of "compressive Sensing"

## Introduction

Faithfully and efficiently reconstructing a signal from coded measurements is an age-old scientific problem. The Shannon/Nyquist sampling theorem states that, to reconstruct a signal from its samples without incurring artifacts such as aliasing, one must sample at least twice as fast as the maximum signal bandwidth. However, in many applications, such as digital image and video cameras, the Nyquist rate results in an excessive number of samples, making compression a necessity before the samples can be stored or transmitted. In other applications, such as medical scanners and radar, increasing the sampling rate is usually too costly. This strict, and often costly, Nyquist criterion motivates a need for alternative methods. Compressive sensing, outlined in this paper summary, is one such approach that avoids a high sampling rate when capturing signals.

Compressive sensing is a method of capturing and representing compressible signals at a rate significantly below the Nyquist rate. This method employs nonadaptive linear projections that preserve the structure of the signal, which is then reconstructed from these projections using optimization techniques<ref name="R1">E.J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.</ref> <ref name="R2">D. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.</ref>.

In this summary we will learn what a compressible signal is, how to acquire compressive measurements of such a signal, and how to reconstruct the signal from those measurements.

## Compressible Signals

Let us define a discrete-time signal $\ x$ with the following properties:

• Real valued
• Finite length
• One dimensional

Then $\ x$ is an $N \times 1$ column vector in $\mathbb{R}^N$ with elements $x[n], \; n=1,2,\dots,N$. Note that an image or any other high-dimensional data point is pre-processed by first representing it as a long one-dimensional vector. To represent a signal in $\mathbb{R}^N$ we define the $N \times 1$ basis vectors $\, \psi_i, \; i = 1,\dots,N$. To keep things simple, assume that the basis is orthonormal. Using the $N \times N$ basis matrix $\,\psi = [\psi_1|\psi_2|\dots|\psi_N]$, the signal $\ x$ can be expressed as

$x=\sum_{i=1}^N s_{i} \psi_{i} \; \textrm{ or } \; x=\psi s$,

where $\ s$ is a $N \times 1$ column vector of weighting coefficients $s_i=\langle x,\psi_i \rangle=\psi^{T}_i x$.
It is easy to see that $\,x$ and $\,s$ are equivalent representations of the signal to be captured, with $\,x$ in the time or space domain (depending on the nature of the signal) and $\,s$ in the $\,\psi$ domain. The signal is $\ K$-sparse if it is a linear combination of only $\ K$ basis vectors, and it is compressible if the representation above has just a few large coefficients and many small ones. We now briefly review how transform coding based on a sample-then-compress framework works in data acquisition systems such as digital cameras (for which transform coding plays a central role). The procedure consists of the following steps:

Step 1: The full $\,N$-sample signal $\,x$ is acquired.
Step 2: The complete set of transform coefficients $\,\{s_i\}$ is computed via $\,s = \psi^T x$.
Step 3: The $\,K$ largest coefficients are located.
Step 4: The $\,(N-K)$ smallest coefficients are discarded.
Step 5: The values and locations of the $\,K$ largest coefficients are encoded.
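The five steps above can be sketched numerically. The following is a minimal numpy illustration; the DCT basis and the sizes $N$ and $K$ are illustrative assumptions, not taken from the paper:

```python
import numpy as np

N, K = 64, 4  # signal length and sparsity level (illustrative choices)

# Orthonormal DCT-II basis: the rows of C are the basis vectors psi_i^T.
n = np.arange(N)
C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0] *= np.sqrt(1.0 / N)
C[1:] *= np.sqrt(2.0 / N)
psi = C.T  # columns are psi_1, ..., psi_N

# Synthesize a K-sparse signal in the psi domain; x is its time-domain view.
rng = np.random.default_rng(0)
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = psi @ s

# Steps 1-5: acquire all N samples, compute ALL N coefficients, then keep
# only the K largest (values and locations) and discard the rest.
coeffs = psi.T @ x                          # step 2: s = psi^T x
largest = np.argsort(np.abs(coeffs))[-K:]   # step 3: locate the K largest
s_hat = np.zeros(N)
s_hat[largest] = coeffs[largest]            # steps 4-5: keep values/locations
x_hat = psi @ s_hat                         # lossless here, since x is K-sparse
```

Because the synthetic $x$ is exactly $K$-sparse, discarding the $(N-K)$ smallest coefficients loses nothing; for a merely compressible signal the same procedure incurs a small approximation error. Note the built-in waste: all $N$ coefficients were computed only to throw most of them away.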

Note that transform coding built on this sample-then-compress framework has the following inherent inefficiencies:

• The initial number of samples $\,N$ may be very large even if the desired $\ K$ is small.
• The set of all $\ N$ transform coefficients $\,\{s_i\}$ must be calculated even though all but $\ K$ of them will be discarded.
• The locations of the large coefficients must be encoded, thus introducing an overhead to the algorithm.

## The Problem Statement for the Compressive Sensing Problem

Compressive sensing directly acquires the compressed signal without going through the intermediate stage of acquiring $\ N$ samples. In order to do this, a general linear measurement process is defined such that $\, M \lt N$ inner products between $\ x$ and a collection of vectors $\{\phi \}_{j=1}^M$ are computed to give $y_{j}=\langle x, \phi_j \rangle$.
Ordering the measurements $\ y_j$ in an $M\times 1$ vector $\ y$, and the measurement vectors $\phi_{j}^{T}$ as rows in an $M \times N$ matrix $\ \phi$, we can then use $\ x=\psi s$ to get

$\ y=\phi x=\phi \psi s=\theta s$,

where $\ \theta=\phi \psi$ is an $M \times N$ matrix. This measurement process is nonadaptive, which implies that $\ \phi$ is fixed and does not depend on the signal $\ x$. So, in order to design a viable compressive sensing method the following two conditions must be met:

• A stable measurement matrix $\ \phi$ must be constructed which preserves the information while reducing the dimension from $\ N$ to $\ M$.
• A reconstructing algorithm must be designed which recovers $\ x$ from $\ y$ by using only $M \approx K$ measurements (which is roughly the number of coefficients recorded by the traditional transform coder mentioned earlier).

## Part 1 of the Solution: Constructing a Stable Measurement Matrix

Given the $\, M$ available measurements $\, y$, where $\,M \lt N$, the matrix $\,\phi$ must allow the signal $\, x$ to be reconstructed with a high level of accuracy. If $\,x$ is $\,K$-sparse and the locations of the non-zero coefficients in $\,s$ are known, then the problem can be solved provided $M \geq K$. For this problem to be well conditioned we must have the following necessary and sufficient condition:

$1-\epsilon \leq \frac{\parallel \theta v\parallel_2}{\parallel v\parallel_2}\leq 1+\epsilon$,

where $\,v$ is any vector sharing the same non-zero entries as $\,s$, and $\epsilon \geq 0$ is the restricted isometry constant. This means $\,\theta$ must preserve the lengths of these particular $K$-sparse vectors. This condition is referred to as the restricted isometry property (RIP)<ref name="R1"/> <ref name="R2"/>. Note that, in practice, it is typically impossible to know the locations of the $\,K$ non-zero entries in $\,s$. However, a sufficient condition for a stable solution for both $K$-sparse and compressible signals is that $\,\theta$ satisfies the RIP for an arbitrary $\,3K$-sparse vector $\,v$. A related property called incoherence requires that the rows $\,\{\phi_j\}$ of $\,\phi$ cannot sparsely represent the columns $\,\{\psi_i\}$ of $\,\psi$, and vice versa. Consequently, both $\,K$-sparse and compressible signals of length $\,N$ can be reconstructed using only $\,M \geq cK\log(N/K) \ll N$ random Gaussian measurements.

The RIP and incoherence conditions can be satisfied with high probability simply by choosing a random matrix $\,\phi$. For example, let us construct a matrix $\,\phi$ whose elements $\,\phi_{i,j}$ are independent and identically distributed (iid) random variables drawn from a Gaussian probability distribution with mean zero and variance $\,1/N$ <ref name="R1"/><ref name="R2"/><ref name="R4">R.G. Baraniuk, M. Davenport, R. DeVore, and M.B. Wakin, “A simple proof of the restricted isometry property for random matrices (aka the Johnson-Lindenstrauss lemma meets compressed sensing),” Constructive Approximation, 2007.</ref>. Then the measurements $\,y$ are merely $\,M$ different randomly weighted linear combinations of the elements of $\,x$, and the Gaussian measurement matrix $\,\phi$ has the following properties:

1. The incoherence property is satisfied by the matrix $\,\phi$ with the basis $\,\psi = I$ with high probability since it is unlikely that the rows of the Gaussian matrix $\,\phi$ will sparsely represent the columns of the identity matrix and vice versa. In this case, if $\, M \geq cK\log(N/K)$ with $\,c$ a small constant, then $\,\theta = \phi\psi = \phi I = \phi$ will satisfy the RIP with high probability.
2. The matrix $\,\phi$ is universal, meaning that $\,\theta$ will satisfy the RIP with high probability regardless of the choice of the orthonormal basis $\,\psi$.
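Both properties are easy to probe empirically. The sketch below draws an iid Gaussian $\phi$ and checks that it approximately preserves the length of random sparse vectors. The entries here are scaled to variance $1/M$ rather than the $1/N$ convention above, so that lengths are preserved without an extra constant factor (the two conventions differ only by a fixed rescaling); all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 64, 4  # illustrative sizes with M << N

# iid Gaussian measurement matrix, scaled so E||phi v||^2 = ||v||^2.
phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Empirical near-isometry on random K-sparse vectors (psi = I, so theta = phi):
# the ratio ||phi v|| / ||v|| should concentrate around 1.
ratios = []
for _ in range(200):
    v = np.zeros(N)
    v[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(phi @ v) / np.linalg.norm(v))
ratios = np.array(ratios)
```

Over many random sparse vectors the observed ratios stay close to 1, which is exactly the RIP-style behavior the measurement matrix needs.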

## Part 2 of the Solution: Designing a Reconstruction Algorithm for Signals

Now we are left with the task of designing a reconstruction algorithm. The reconstruction algorithm must take into account the $\,M$ measurements in the vector $\,y$, the random measurement matrix $\,\phi$ (or the seed that was used to generate it) and the basis $\,\psi$ and then reconstruct the length $\,N$ signal $\,x$ or its sparse coefficient vector $\,s$. Since $\,M\lt N$, the system is underdetermined (having fewer equations than variables), and thus we have infinitely many $\,s'$ which satisfy

$\,\theta s'=y$.

The justification is that if $\,\theta s=y$, then $\,\theta (s+r)=y$ for any vector $\,r$ in the null space of $\,\theta$, denoted $\,N(\theta)$. This suggests searching for the signal's sparse coefficient vector in the $\,(N-M)$-dimensional translated null space $\,H=N(\theta)+s$. Related to the concept of $L^p$ spaces, the $\,l_p$ norm of the vector $\,s$ is given by $\,(||s||_p)^p = \sum_{i=1}^N |s_i|^p$. With this in mind, reconstruction can be attempted in the following ways, each based on a different $\,l_p$ norm.
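As a concrete illustration of these norms, consider a 2-sparse vector (a minimal numpy sketch):

```python
import numpy as np

s = np.array([0.0, 3.0, 0.0, -4.0, 0.0])  # a 2-sparse vector

l0 = np.count_nonzero(s)   # l0 "norm": number of non-zero entries -> 2
l1 = np.sum(np.abs(s))     # l1 norm: sum of magnitudes -> 7.0
l2 = np.linalg.norm(s)     # l2 norm: Euclidean length -> 5.0
```

The $l_0$ count measures sparsity directly, while the $l_2$ norm measures energy; this difference is what drives the comparison of the three reconstruction strategies that follow.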

#### Minimum $\,l_2$ norm reconstruction

In order to find the vector $\,s$, the classical approach is to find the vector in the translated null space with the smallest $\,l_2$ norm (also called the energy norm) by solving

$\hat{s}=\arg\min\parallel s'\parallel_2 \; \textrm{ such } \; \textrm{ that } \; \theta s'=y.$

The closed-form solution is given by $\hat{s}=\theta^{T}(\theta \theta^{T})^{-1}y$. However, this minimization returns a non-sparse $\hat{s}$ with many non-zero elements; in other words, it is almost never able to reconstruct the original signal.
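This failure is easy to reproduce. The following sketch (with $\psi = I$, so $\theta = \phi$, and illustrative sizes) applies the closed-form minimum-$l_2$ solution to measurements of a $K$-sparse signal:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 64, 16, 3  # illustrative sizes
phi = rng.standard_normal((M, N)) / np.sqrt(M)  # theta = phi since psi = I

# A K-sparse signal and its M < N measurements.
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = phi @ s

# Closed-form minimum-l2 solution: s_hat = theta^T (theta theta^T)^{-1} y.
s_hat = phi.T @ np.linalg.solve(phi @ phi.T, y)

nonzeros = np.count_nonzero(np.abs(s_hat) > 1e-8)  # far more than K
```

The solution reproduces the measurements exactly ($\theta \hat{s} = y$) yet spreads energy over essentially all $N$ entries, so the original $K$-sparse vector is not recovered.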

#### Minimum $\,l_0$ norm reconstruction

The drawback of the $\,l_2$ norm in reconstruction is that it measures signal energy rather than signal sparsity. The $\,l_0$ norm, also known as the cardinality function, is much more suitable, because it counts the number of non-zero entries in $\,s$; i.e., the $\,l_0$ norm of a $\,K$-sparse vector is simply $\,K$. In this case the problem formulation is

$\hat{s}=\arg\min\parallel s'\parallel_0 \; \textrm{ such } \; \textrm{ that } \; \theta s'=y.$

At first glance this approach looks attractive, since it recovers a $\,K$-sparse signal exactly with high probability using only $\,M = K+1$ iid Gaussian measurements. However, solving this problem is both numerically unstable and NP-complete, requiring an exhaustive search over all $\binom{N}{K}$ possible locations of the non-zero entries in $\,s$.
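The exhaustive search can be written down directly for tiny sizes (an illustrative sketch; $M$ is taken a little larger than $K+1$ here so the least-squares subproblems are well conditioned):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 12, 6, 2  # tiny on purpose: the search cost is binomial(N, K)
phi = rng.standard_normal((M, N)) / np.sqrt(M)

s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = phi @ s

# Try every possible support of size K; accept the first one whose
# least-squares fit reproduces the measurements exactly.
s_hat = None
for idx in itertools.combinations(range(N), K):
    cols = phi[:, list(idx)]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    if np.allclose(cols @ coef, y):
        s_hat = np.zeros(N)
        s_hat[list(idx)] = coef
        break
```

Even at $N = 12$, $K = 2$ this loop visits up to 66 candidate supports; at realistic sizes the $\binom{N}{K}$ search is hopeless, which is what makes the $l_1$ relaxation attractive.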

#### Minimum $\,l_1$ norm reconstruction

Now, if we instead perform the minimization using the $\,l_1$ norm, we are able to recover $\,K$-sparse signals exactly, and to closely approximate compressible signals, with high probability using only $M \geq cK\log(\frac{N}{K})$ iid Gaussian measurements. We aim to solve

$\hat{s}=\arg\min\parallel s'\parallel_1 \; \textrm{such} \; \textrm{that} \; \theta s'=y.$

This optimization problem is convex and reduces to a linear program known as basis pursuit<ref name="R1"/> <ref name="R2"/>, with computational complexity $\,O(N^3)$.
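Basis pursuit can be posed as a linear program by splitting $s' = u - v$ with $u, v \geq 0$, so that $\|s'\|_1 = \sum_i (u_i + v_i)$. The sketch below solves it with SciPy's `linprog` (an assumed dependency; problem sizes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, M, K = 32, 16, 2  # illustrative sizes
theta = rng.standard_normal((M, N)) / np.sqrt(M)

s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = theta @ s

# min sum(u + v)  subject to  theta (u - v) = y,  u >= 0, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([theta, -theta])
res = linprog(c, A_eq=A_eq, b_eq=y)  # linprog's default bounds are (0, None)
s_hat = res.x[:N] - res.x[N:]
```

With enough Gaussian measurements, the recovered $\hat{s}$ matches the true sparse vector with high probability, at polynomial rather than combinatorial cost.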

Figure 2 from the original paper [5] illustrates these three approaches for reconstructing signals. For convenience, this figure is reproduced below.

Furthermore, other related reconstruction algorithms can be found in [6] and [7] of the original paper.

## Example

The paper discusses a practical example of the technique of compressive sensing using a "single-pixel compressive sensing camera"<ref name="R3">D. Takhar, V. Bansal, M. Wakin, M. Duarte, D. Baron, J. Laska, K.F. Kelly, and R.G. Baraniuk, “A compressed sensing camera: New theory and an implementation using digital micromirrors,” in Proc. Comput. Imaging IV SPIE Electronic Imaging, San Jose, Jan. 2006.</ref>. The camera has one mirror per pixel; at random, each mirror either reflects its light toward a photodiode or away from it. Thus $\,y_i$, the voltage at the photodiode for the $\,i$-th measurement, is the inner product of the image we want, $\,x$, with $\,\phi_i$, a vector of ones and zeros indicating which mirrors direct light toward the photodiode. Repeating this with fresh random mirror patterns yields $\,y_1,\dots,y_M$, and the image can then be reconstructed using the techniques discussed above. This example can be seen in the following diagram. The image in part (b) is from a conventional digital camera; the image in part (c) is reconstructed by the single-pixel camera using about 60% fewer random measurements than reconstructed pixels.
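The measurement model of the single-pixel camera can be simulated in a few lines (a toy sketch with made-up sizes; the scene here is a 1-D pixel vector, and reconstruction would proceed with the $l_1$ minimization described earlier):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 64, 32  # N mirrors/pixels, M sequential photodiode readings

# A toy scene: a few bright pixels on a dark background.
x = np.zeros(N)
x[rng.choice(N, 3, replace=False)] = rng.uniform(1.0, 2.0, 3)

# For each measurement the mirrors flip at random: 1 steers a pixel's light
# toward the photodiode, 0 steers it away.
phi = rng.integers(0, 2, size=(M, N)).astype(float)

# The j-th photodiode voltage is the inner product <x, phi_j>: the summed
# intensity of whichever pixels are currently aimed at the photodiode.
y = phi @ x
```

Each reading compresses the whole scene into a single number, which is why only one physical photodetector is needed.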

## Robustness

In practice, no measurement system is perfect: devices do not have infinite precision. Thus, compressed sensing must continue to recover signals relatively well even in the presence of noise. Fortunately, it has been shown that the reconstruction error of compressive sensing is proportional to the error incurred by approximating the signal with a sparse one (in the chosen basis), plus the noise that enters the measurements.

More formally,

$\parallel x^* - x \parallel_{l_2} \le C_0 \parallel x - x_S \parallel_{l_1} / \sqrt{S} + C_1 \epsilon$

where $\,x^*$ is the signal recovered under noisy conditions, $\,x_S$ is the vector $\,x$ with all but its largest $\,S$ components set to zero, and $\,\epsilon$ is a bound on the measurement noise. The constants $\,C_0$ and $\,C_1$ are generally small.

This is a very important result: it means that the reconstruction error of compressive sensing is directly proportional to the measurement noise. It is this result that ultimately moves compressive sensing from an interesting academic exercise to a practical technique.

## Conclusion

The compressive sensing framework can be applied to analog signals as well, and the technique finds many practical applications in image processing and related fields. In this summary, we learned about compressive sensing, which is a more efficient method than traditional transform coding built on the sample-then-compress framework.

## References

<references />

5. R.G. Baraniuk, “Compressive sensing [Lecture Notes],” IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, July 2007.