Compressive Sensing

Still under construction

Introduction

Faithfully and efficiently reconstructing a signal from coded measurements is an age-old scientific problem. The Shannon/Nyquist sampling theorem states that, in order to reconstruct a signal from its samples, the signal must be sampled at a rate at least twice its maximum bandwidth. However, for many practical applications, such as imaging systems, the Nyquist sampling rate is very high, which motivates the search for alternative methods. Signal compression is one such approach to avoiding a high sampling rate. This summary outlines compressive sensing, a method for capturing and representing compressible signals at a rate significantly below the Nyquist rate. The method employs nonadaptive linear projections that preserve the structure of the signal, which is then reconstructed by an optimization technique, as outlined in <ref name="R1">C. Dick, F. Harris, and M. Rice, “Synchronization in software defined radios—Carrier and timing recovery using FPGAs,” in Proc. IEEE Symp. Field-Programmable Custom Computing Machines, Napa Valley, CA, pp. 195–204, Apr. 2000.</ref> <ref name="R2">J. Valls, T. Sansaloni, A. Perez-Pascual, V. Torres, and V. Almenar, “The use of CORDIC in software defined radios: A tutorial,” IEEE Commun. Mag., vol. 44, no. 9, pp. 46–50, Sept. 2006.</ref>.

In summary, this paper covers the definition of a compressible signal, the construction of the compressive measurements, and the reconstruction of the signal from those measurements.

Compressible Signals

Let us define a discrete-time signal x with the following properties:

  • Real-valued
  • Finite-length
  • One-dimensional

x can be viewed as an [math]\displaystyle{ N \times 1 }[/math] column vector in [math]\displaystyle{ \mathbb{R}^N }[/math] with elements x[n], n = 1, 2, ..., N. To represent a signal in [math]\displaystyle{ \mathbb{R}^N }[/math], we define a set of [math]\displaystyle{ N \times 1 }[/math] basis vectors [math]\displaystyle{ \{\psi_i\}_{i=1}^{N} }[/math]. To keep it simple, assume that the basis is orthonormal. Using the [math]\displaystyle{ N \times N }[/math] basis matrix [math]\displaystyle{ \psi = [\psi_1|\psi_2|\dots|\psi_N] }[/math], the signal x can be expressed as
[math]\displaystyle{ x=\sum_{i=1}^N s_{i} \psi_{i} }[/math] or [math]\displaystyle{ x=\psi s }[/math]
where s is the [math]\displaystyle{ N \times 1 }[/math] column vector of weighting coefficients
[math]\displaystyle{ s_i=\langle x,\psi_i \rangle=\psi^{T}_i x }[/math]
The signal x is K-sparse if it is a linear combination of only K basis vectors. The signal is compressible if the representation above has just a few large coefficients and many small coefficients. Note that this approach has the following inefficiencies (a small numerical sketch of a K-sparse representation follows the list below):

  • The initial number of samples N may be very large even if the desired K is small.
  • All N transform coefficients [math]\displaystyle{ s_i }[/math] must be computed even though all but K of them will be discarded.
  • The locations of the large coefficients must be encoded, thus introducing overhead.
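As a concrete illustration of a K-sparse representation, the following sketch builds a signal that has only K nonzero coefficients in an orthonormal basis and recovers those coefficients through the inner products [math]\displaystyle{ s_i=\psi_i^T x }[/math]. The choice of the DCT as the basis [math]\displaystyle{ \psi }[/math] and the values N = 256 and K = 8 are illustrative assumptions, not part of the paper.

<pre>
import numpy as np
from scipy.fft import idct

# Illustrative sizes (assumptions): signal length N, sparsity level K.
N, K = 256, 8
rng = np.random.default_rng(0)

# N x N orthonormal basis matrix psi; its columns are DCT basis vectors
# (an assumed choice of basis), obtained by applying the inverse DCT to
# the columns of the identity matrix.
psi = idct(np.eye(N), norm='ortho', axis=0)

# Weighting coefficients s with only K nonzero entries, so x = psi s is K-sparse.
s = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s[support] = rng.standard_normal(K)

# Synthesize the signal x = psi s.
x = psi @ s

# By orthonormality, the coefficients are recovered as s_i = <x, psi_i> = psi^T x.
s_check = psi.T @ x
print(np.allclose(s_check, s))                      # True
print(np.count_nonzero(np.abs(s_check) > 1e-10))    # 8: only K coefficients are nonzero
</pre>

The sketch also exposes the inefficiency noted above: all N transform coefficients are computed even though only K of them are kept.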

The problem statement for compressive sensing

Compressive sensing directly acquires the compressed signal without going through the intermediate stage of acquiring N samples. To achieve this, a general linear measurement process is defined that computes [math]\displaystyle{ M \lt N }[/math] inner products between x and a collection of vectors [math]\displaystyle{ \{\phi_j\}_{j=1}^M }[/math]:
[math]\displaystyle{ y_{j}=\langle x, \phi_j \rangle }[/math]
Stacking the measurements [math]\displaystyle{ y_j }[/math] into an [math]\displaystyle{ M\times 1 }[/math] vector y and the measurement vectors [math]\displaystyle{ \phi_{j}^{T} }[/math] as rows of an [math]\displaystyle{ M \times N }[/math] matrix [math]\displaystyle{ \phi }[/math], and using the previous representation of x, we have
[math]\displaystyle{ y=\phi x=\phi \psi s=\theta s }[/math]
where [math]\displaystyle{ \theta=\phi \psi }[/math] is an [math]\displaystyle{ M \times N }[/math] matrix. This measurement process is nonadaptive, which means that [math]\displaystyle{ \phi }[/math] is fixed and does not depend on the signal x. This leads to the following two design requirements (a small numerical sketch of the measurement process follows the list below):

  • Constructing a stable measurement matrix [math]\displaystyle{ \phi }[/math] that preserves the information in the signal while reducing the dimension from N to M.
  • Designing a reconstruction algorithm that recovers x from y using only [math]\displaystyle{ M \approx K }[/math] measurements.
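The following sketch illustrates the measurement process [math]\displaystyle{ y=\phi x=\phi \psi s=\theta s }[/math]. The Gaussian choice for [math]\displaystyle{ \phi }[/math] and the sizes N = 256, M = 40, K = 8 are assumptions made only for illustration; how to construct a provably stable [math]\displaystyle{ \phi }[/math] is the subject of the next section.

<pre>
import numpy as np
from scipy.fft import idct

# Illustrative sizes (assumptions): N-dimensional signal, M < N measurements,
# K-sparse coefficient vector.
N, M, K = 256, 40, 8
rng = np.random.default_rng(1)

# Orthonormal DCT basis psi and a K-sparse coefficient vector s, as in the
# earlier sketch (the DCT is an assumed choice of basis).
psi = idct(np.eye(N), norm='ortho', axis=0)
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = psi @ s

# M x N measurement matrix phi: row j is the measurement vector phi_j^T.
# A Gaussian random matrix is used here purely as an illustrative choice.
phi = rng.standard_normal((M, N)) / np.sqrt(M)

# The M inner products y_j = <x, phi_j>, computed as one matrix-vector product.
y = phi @ x                        # shape (M,): M < N nonadaptive measurements

# Equivalently, y = theta s with theta = phi psi (an M x N matrix).
theta = phi @ psi
print(np.allclose(y, theta @ s))   # True: both factorizations give the same y
</pre>

Note that phi is generated once and applied to x without looking at the signal, which is what "nonadaptive" means here.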

Constructing a Stable Measurement Matrix