Independent Component Analysis: algorithms and applications
Motivation
Imagine a room where two people are speaking at the same time and two microphones are used to record the speech signals. Denoting the speech signals by [math]\displaystyle{ s_1(t) \, }[/math] and [math]\displaystyle{ s_2(t)\, }[/math] and the recorded signals by [math]\displaystyle{ x_1(t) \, }[/math] and [math]\displaystyle{ x_2(t) \, }[/math], we can assume the linear relation [math]\displaystyle{ x = As \, }[/math], where [math]\displaystyle{ A \, }[/math] is a parameter matrix that depends on the distances of the microphones from the speakers. The interesting problem of estimating both [math]\displaystyle{ A\, }[/math] and [math]\displaystyle{ s\, }[/math] using only the recorded signals [math]\displaystyle{ x\, }[/math] is called the cocktail-party problem, which is the signature problem for ICA.
Introduction
ICA shows, perhaps surprisingly, that the cocktail-party problem can be solved by imposing two rather weak (and often realistic) assumptions, namely that the source signals are statistically independent and have non-Gaussian distributions. Note that PCA and classical factor analysis cannot solve the cocktail-party problem because such methods seek components that are merely uncorrelated, a condition much weaker than independence.
ICA has many applications in science and engineering. For example, it can be used to find the original components of brain activity by analyzing electrical recordings of brain activity given by the electroencephalogram (EEG). Another important application is finding efficient representations of multimedia data for compression or denoising.
Definition of ICA
The ICA model assumes a linear mixing model [math]\displaystyle{ x = As \, }[/math], where [math]\displaystyle{ x \, }[/math] is a random vector of observed signals, [math]\displaystyle{ A \, }[/math] is a square matrix of constant parameters, and [math]\displaystyle{ s \, }[/math] is a random vector of statistically independent source signals. Note that the restriction that [math]\displaystyle{ A \, }[/math] be a square matrix is not theoretically necessary and is imposed only to simplify the presentation. Also keep in mind that the mixing model does not assume any particular distributions for the independent components.
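For concreteness, here is a minimal sketch of this mixing model in NumPy; the source distributions and the mixing matrix below are arbitrary illustrative choices, not part of the model itself.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Two statistically independent, non-Gaussian sources
# (a uniform and a Laplacian signal -- illustrative choices only).
s = np.vstack([rng.uniform(-1, 1, n),
               rng.laplace(size=n)])

# An arbitrary square, invertible mixing matrix A.
A = np.array([[1.0, 0.6],
              [0.5, 1.2]])

# Observed signals: each row of x is a different linear mixture of the sources.
x = A @ s

# The ICA problem: recover s (up to sign, scale and ordering) from x alone.
</syntaxhighlight>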
Ambiguities of ICA
Because both [math]\displaystyle{ A \, }[/math] and [math]\displaystyle{ s \, }[/math] are unknown, it is easy to see that the variances, the signs, and the order of the independent components cannot be determined. Fortunately, such ambiguities are usually insignificant in practice, and ICA algorithms simply fix the signs and assume unit variance for the components.
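To see where the scale ambiguity comes from (a standard observation, written here with an arbitrary invertible diagonal matrix [math]\displaystyle{ D \, }[/math] for illustration): since [math]\displaystyle{ x = As = (AD^{-1})(Ds) \, }[/math], the pair [math]\displaystyle{ (AD^{-1}, Ds) \, }[/math] explains the observations exactly as well as [math]\displaystyle{ (A, s) \, }[/math], and the components of [math]\displaystyle{ Ds \, }[/math] are still independent. Replacing [math]\displaystyle{ D \, }[/math] with a permutation matrix gives the ordering ambiguity in the same way.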
Why Gaussian variables are forbidden
In this section we show that ICA cannot resolve independent components which have Gaussian distributions.
To see this, assume that the two source signals [math]\displaystyle{ s_1 \, }[/math] and [math]\displaystyle{ s_2 \, }[/math] are Gaussian and the mixing matrix [math]\displaystyle{ A \, }[/math] is orthogonal. Then the observed signals [math]\displaystyle{ x_1 \, }[/math] and [math]\displaystyle{ x_2 \, }[/math] have the joint density [math]\displaystyle{ p(x_1,x_2)=\frac{1}{2 \pi}\exp(-\frac{x_1^2+x_2^2}{2}) }[/math], which is rotationally symmetric. In other words, the joint density is the same for any orthogonal mixing matrix. This means that in the case of Gaussian variables, ICA can only determine the mixing matrix up to an orthogonal transformation.
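A quick numerical illustration of this rotational symmetry (the rotation angle below is arbitrary): mixing two independent standard Gaussian sources with an orthogonal matrix leaves the joint distribution unchanged, so nothing in the observed data distinguishes the mixing matrix from the identity.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal((2, 100000))        # two independent N(0, 1) sources

theta = 0.7                                 # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal mixing matrix
x = A @ s

# Because A is orthogonal, the mixtures are again uncorrelated, unit-variance
# Gaussians: the joint distribution of x matches that of s, so no statistic
# of the data can identify A.
print(np.round(np.cov(s), 2))
print(np.round(np.cov(x), 2))               # both are ~ the identity matrix
</syntaxhighlight>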
The fact that ICA cannot be used on Gaussian variables is a primary reason for ICA's late emergence in the research literature, because classical factor analysis assumes Gaussian random variables.
Independence is a much stronger requirement than uncorrelatedness. Of particular interest to ICA theory are the following two results, which show that under additional assumptions uncorrelatedness becomes equivalent to independence.
Result 1: Two random variables [math]\displaystyle{ X \, }[/math] and [math]\displaystyle{ Y \, }[/math] are independent if and only if, for all bounded continuous functions [math]\displaystyle{ g \, }[/math] and [math]\displaystyle{ h \, }[/math], the random variables [math]\displaystyle{ g(X) \, }[/math] and [math]\displaystyle{ h(Y) \, }[/math] are uncorrelated.
Result 2: Two jointly Gaussian random variables [math]\displaystyle{ X \, }[/math] and [math]\displaystyle{ Y \, }[/math] are independent if and only if they are uncorrelated.
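To see how far apart the two notions can be in general, consider the following small numerical example (an illustration, not taken from the paper): with a symmetric [math]\displaystyle{ X \, }[/math] and [math]\displaystyle{ Y = X^2 \, }[/math], the variable [math]\displaystyle{ Y \, }[/math] is completely determined by [math]\displaystyle{ X \, }[/math], yet the two are uncorrelated because [math]\displaystyle{ E\{XY\} = E\{X^3\} = 0 \, }[/math].

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100000)   # symmetric about zero
y = x ** 2                       # completely determined by x

# Uncorrelated: cov(x, y) = E{x^3} - E{x} E{x^2} = 0 by symmetry.
print(np.corrcoef(x, y)[0, 1])                          # ~ 0

# ...but not independent: bounded nonlinear transforms of x and y are
# clearly correlated (compare with Result 1 above).
print(np.corrcoef(np.tanh(x) ** 2, np.tanh(y))[0, 1])   # far from 0
</syntaxhighlight>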
ICA Estimation Principles
Principle 1: Nonlinear decorrelation
From the above discussion, we see that we can estimate the mixing matrix [math]\displaystyle{ A \, }[/math] by finding a matrix [math]\displaystyle{ W \, }[/math] such that the components of [math]\displaystyle{ y = Wx \, }[/math] are nonlinearly decorrelated: for any [math]\displaystyle{ i \neq j \, }[/math] and suitable nonlinear functions [math]\displaystyle{ g \, }[/math] and [math]\displaystyle{ h \, }[/math], [math]\displaystyle{ g(y_i) \, }[/math] and [math]\displaystyle{ h(y_j) \, }[/math] are uncorrelated.
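As a small illustration of this criterion, the helper below (a hypothetical sketch, not code from the paper) measures how far a candidate demixing matrix [math]\displaystyle{ W \, }[/math] is from making [math]\displaystyle{ g(y_i) \, }[/math] and [math]\displaystyle{ h(y_j) \, }[/math] uncorrelated; the choice of [math]\displaystyle{ g = \tanh \, }[/math] and [math]\displaystyle{ h \, }[/math] the identity is one common but here arbitrary choice of nonlinearities.

<syntaxhighlight lang="python">
import numpy as np

def nonlinear_correlation(W, x, g=np.tanh, h=lambda u: u):
    """Cross-covariances between g(y_i) and h(y_j) for y = W x.

    x is a (components x samples) array; values near zero for all
    i != j indicate that W achieves nonlinear decorrelation.
    """
    y = W @ x
    gy = g(y) - g(y).mean(axis=1, keepdims=True)
    hy = h(y) - h(y).mean(axis=1, keepdims=True)
    c = gy @ hy.T / x.shape[1]          # (i, j) entry: cov(g(y_i), h(y_j))
    return c - np.diag(np.diag(c))      # zero the diagonal; keep i != j terms
</syntaxhighlight>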
Principle 2: Maximizing Non-Gaussianity
Loosely speaking, the Central Limit Theorem says that a sum of non-Gaussian random variables is closer to Gaussian than the original variables. Because of this, any mixture of the non-Gaussian independent components is more Gaussian than the original signals [math]\displaystyle{ s \, }[/math]. Using this observation, we can recover the original signals from the observed signals [math]\displaystyle{ x \, }[/math] as follows: find weighting vectors [math]\displaystyle{ w \, }[/math] such that the projections [math]\displaystyle{ w^T x \, }[/math] are as non-Gaussian as possible. Indeed, [math]\displaystyle{ w^T x = (A^T w)^T s \, }[/math] is itself a linear combination of the independent components, and it is least Gaussian exactly when it equals (up to a scalar) a single component [math]\displaystyle{ s_i \, }[/math].
Measures of non-Gaussianity
Kurtosis
Kurtosis is the classical measure of non-Gaussianity, defined by [math]\displaystyle{ kurt(y) = E\{y^4\} - 3(E\{y^2\})^2. \, }[/math] Positive kurtosis typically corresponds to a spiky pdf with a sharp peak near zero and heavy tails (e.g., the Laplace distribution); negative kurtosis typically corresponds to a flat pdf that is roughly constant near zero and falls off quickly at the ends (e.g., the uniform distribution on a finite support).
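A direct sample implementation of this definition, together with a quick check of the signs for the two examples above and of the claim in Principle 2 that mixing makes signals "more Gaussian" (the particular distributions are illustrative choices):

<syntaxhighlight lang="python">
import numpy as np

def kurt(y):
    """Sample version of kurt(y) = E{y^4} - 3 (E{y^2})^2 (for zero-mean y)."""
    y = np.asarray(y, dtype=float)
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

rng = np.random.default_rng(0)
n = 200000

# Unit-variance examples of the two cases described above.
laplace = rng.laplace(scale=1 / np.sqrt(2), size=n)      # spiky, heavy tails
uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)   # flat, light tails
gauss = rng.standard_normal(n)                           # Gaussian reference

print(kurt(laplace), kurt(uniform), kurt(gauss))         # ~ +3.0, -1.2, 0.0

# Principle 2 in action: an equal-variance mixture of the two non-Gaussian
# sources has kurtosis closer to zero, i.e. it is "more Gaussian" than either.
mix = (laplace + uniform) / np.sqrt(2)                   # still unit variance
print(kurt(mix))                                         # ~ +0.45
</syntaxhighlight>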
As a computational measure of non-Gaussianity, kurtosis has the merit that it is easy to compute and has nice linearity properties: [math]\displaystyle{ kurt(y_1 + y_2) = kurt(y_1) + kurt(y_2) \, }[/math] for independent [math]\displaystyle{ y_1 \, }[/math] and [math]\displaystyle{ y_2 \, }[/math], and [math]\displaystyle{ kurt(\alpha y) = \alpha^4 kurt(y) \, }[/math]. On the other hand, it is not robust: the sample kurtosis can be significantly affected by a few outliers, even for a large sample size.
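To connect kurtosis back to Principle 2, here is a minimal sketch of a kurtosis-based fixed-point iteration, [math]\displaystyle{ w \leftarrow E\{z(w^T z)^3\} - 3w \, }[/math] followed by normalization, that extracts a single independent component from whitened data. The whitening step (a plain eigendecomposition of the covariance), the sources and the mixing matrix below are illustrative choices, and this is only a bare-bones sketch rather than a full ICA implementation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 50000

# Illustrative non-Gaussian sources and an arbitrary invertible mixing matrix.
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6],
              [0.5, 1.2]])
x = A @ s

# Whiten the observations so that z has (approximately) identity covariance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Fixed-point iteration for one component: w <- E{z (w^T z)^3} - 3w, normalized.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    wz = w @ z
    w_new = (z * wz ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10   # allow for a sign flip
    w = w_new
    if converged:
        break

# y estimates one source up to sign and scale: it should be strongly correlated
# with exactly one of the true sources and nearly uncorrelated with the other.
y = w @ z
print(np.corrcoef(y, s[0])[0, 1], np.corrcoef(y, s[1])[0, 1])
</syntaxhighlight>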