# Independent Component Analysis: Algorithms and Applications

## Motivation

Imagine a room where two people are speaking at the same time and two microphones are used to record the speech signals. Denoting the speech signals by $s_1(t) \,$ and $s_2(t)\,$ and the recorded signals by $x_1(t) \,$ and $x_2(t) \,$, we can assume the linear relation $x = As \,$, where $A \,$ is a parameter matrix that depends on the distances of the microphones from the speakers. The interesting problem of estimating both $A\,$ and $s\,$ using only the recorded signals $x\,$ is called the cocktail-party problem, which is the signature problem for ICA.

## Introduction

ICA shows, perhaps surprisingly, that the cocktail-party problem can be solved by imposing two rather weak (and often realistic) assumptions, namely that the source signals are statistically independent and have non-Gaussian distributions. Note that PCA and classical factor analysis cannot solve the cocktail-party problem because such methods seek components that are merely uncorrelated, a condition much weaker than independence. The independence assumption gives us an advantage: signals obtained from non-linear transformations of independent source signals remain uncorrelated, which does not hold when the source signals are merely uncorrelated. These two assumptions also give us an objective in finding the matrix $\ A$: we want to find components that are as statistically independent and non-Gaussian as possible.

ICA has a lot of applications in science and engineering. For example, it can be used to find the original components of brain activity by analyzing electrical recordings of brain activity given by electroencephalogram (EEG). Another important application is to efficient representations of multimedia data for compression or denoising.

### Relationship with Dimension Reduction<ref>A. Hyvärinen, J. Karhunen, E. Oja (2001): Independent Component Analysis, New York: Wiley, ISBN 978-0-471-40540-5, introductory chapter</ref>

Suppose we have $n$ observed signals $\ x_i$, where $\ i=1,...,n$, obtained by mixing $\ m$ source signals $\ y_i$, where $\ i=1,...,m$. We want to find a transformation matrix $\ W$ such that, for a given number of dimensions $\ d$, $\ y'=Wx$, where $\ y'$ is a $\ d \times 1$ vector. Each transformed variable $\ y'_i$ is considered a component explaining the essential structure of the observed data. These components should retain as much information about the observed data as possible.

### Concerns

The cocktail-party problem, or blind source separation problem, means that we don't have information about the source signals. In the ICA setting presented here, the number of observed signals and the number of source signals are equal. However, in general the number of sensors could be less than the number of sources. In an extreme case, we may have only one sensor but several sources, for example one microphone recording two speeches. Given such a mixed signal, could we separate it? This is one of the applications in the paper by Francis R. Bach and Michael I. Jordan, Learning Spectral Clustering, With Application To Speech Separation. One concern about ICA is whether it can demix the signals in such a case, where the matrix $\ A$ is not square. Another is whether observed signals that are quite different from each other cause difficulty in applying ICA.

### Definition of ICA

The ICA model assumes a linear mixing model $x = As \,$, where $x \,$ is a random vector of observed signals, $A \,$ is a square matrix of constant parameters, and $s \,$ is a random vector of statistically independent source signals. Each component of $s$ is a source signal. Note that the restriction of $A \,$ being a square matrix is not theoretically necessary and is imposed only to simplify the presentation. Also keep in mind that in the mixing model we do not assume any distributions for the independent components.
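The mixing model can be simulated in a few lines. The sources and the mixing matrix below are arbitrary stand-ins for illustration (a square wave and uniform noise), not taken from any real recording:

```python
# A minimal sketch of the linear mixing model x = As.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))   # square-wave stand-in for a source
s2 = rng.uniform(-1, 1, size=t.size)      # uniform-noise stand-in for a source
S = np.vstack([s1, s2])                   # source matrix, shape (2, 1000)

A = np.array([[1.0, 0.5],                 # hypothetical mixing matrix
              [0.3, 0.8]])
X = A @ S                                 # observed (mixed) signals

print(X.shape)  # (2, 1000)
```

ICA would be given only `X` and asked to recover both `A` and `S`.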

### Ambiguities of ICA

Because both $A \,$ and $s \,$ are unknown, it is easy to see that the variances, the signs, and the order of the independent components cannot be determined: any change to the scale or order of the sources can be canceled by a corresponding change to the matrix $A$. Fortunately such ambiguities are often insignificant in practice, and ICA algorithms typically just fix the sign and assume unit variance of the components.

### Why Gaussian variables are forbidden

In this section we show that ICA cannot resolve independent components which have Gaussian distributions.

To see this, assume that the two source signals $s_1 \,$ and $s_2 \,$ are Gaussian with unit variance and the mixing matrix $A\,$ is orthogonal. Then the observed signals $x_1 \,$ and $x_2 \,$ will have joint density given by $p(x_1,x_2)=\frac{1}{2 \pi}\exp(-\frac{x_1^2+x_2^2}{2})$, which is rotationally symmetric. In other words, the joint density is the same for any orthogonal mixing matrix. This means that in the case of Gaussian variables, ICA can only determine the mixing matrix up to an orthogonal transformation.
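A small simulation makes this concrete: mixing two independent standard Gaussians with an orthogonal matrix leaves their joint distribution unchanged. The sketch below (with an arbitrary rotation angle) verifies that the sample covariance of the mixtures stays close to the identity, exactly as for the unmixed sources, so no second-order statistic reveals the rotation:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal((2, 100000))      # two independent Gaussian sources

theta = 0.7                               # arbitrary illustrative rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal mixing matrix
x = A @ s

# The mixed signals have (approximately) identity covariance, just like
# the sources: the data carry no trace of the rotation angle.
print(np.round(np.cov(x), 2))
```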

The fact that ICA cannot be used on Gaussian variables is a primary reason for ICA's late emergence in the research literature, because classical factor analysis assumes Gaussian random variables.
In the real world, we may face distributions close to the Gaussian distribution, such as the Student t distribution. The question is what happens to ICA in these situations: if it cannot resolve such cases, isn't it too restrictive?

### Independence versus uncorrelatedness

Two random variables $\,y_1, y_2$ are independent if information on $\,y_1$ doesn't give any information on $\,y_2$, and vice versa. In mathematical terms, $\,y_1, y_2$ are independent if the joint probability density function can be written as the product of the individual probability density functions:
$\,p(y_1, y_2) = p_1(y_1)\,p_2(y_2)$

An important property of independent variables $y_1$ and $y_2$ is that for any two functions $h_1$ and $h_2$ we have $E\{h_1(y_1)h_2(y_2)\}=E\{h_1(y_1)\}E\{h_2(y_2)\}$.

Two random variables $\,y_1, y_2$ are uncorrelated if the covariance is zero:
$\,E(y_1y_2) - E(y_1)E(y_2)=0$

Independence is a much stronger requirement than uncorrelatedness. Of particular interest to ICA theory are the following two results, which show that with additional assumptions, uncorrelatedness is equivalent to independence.

Result 1: Two random variables $X \,$ and $Y \,$ are independent if and only if any bounded continuous functions of $X \,$ and $Y \,$ are uncorrelated.

Result 2: Two Gaussian random variables $X \,$ and $Y \,$ are independent if and only if they are uncorrelated.
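A classical counterexample makes the gap between the two notions concrete: the coordinates of a point drawn uniformly on the unit circle are uncorrelated, yet taking $h_1(y)=h_2(y)=y^2$ violates the product rule for expectations, so they are not independent. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200000)
y1, y2 = np.cos(theta), np.sin(theta)     # a random point on the unit circle

# Uncorrelated: the covariance is (numerically) zero ...
print(np.mean(y1 * y2) - np.mean(y1) * np.mean(y2))

# ... but not independent: with h1 = h2 = square, the product rule fails.
lhs = np.mean(y1**2 * y2**2)
rhs = np.mean(y1**2) * np.mean(y2**2)
print(lhs, rhs)   # lhs ≈ 0.125, rhs ≈ 0.25
```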

### Data Whitening

Data whitening is a transformation that changes the covariance matrix of a set of samples into the identity matrix. In other words, it decorrelates the random variables of the samples and rescales each of them to unit variance.
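One common way to whiten is through the eigendecomposition of the sample covariance, $C = EDE^T$, using $V = ED^{-1/2}E^T$ as the whitening matrix. A minimal sketch on arbitrary synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated synthetic data with an arbitrary covariance, shape (2, 50000).
X = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], size=50000).T

# Whitening matrix V = E D^{-1/2} E^T from the eigendecomposition of cov(X),
# so that the whitened data have (close to) identity covariance.
C = np.cov(X)
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ (X - X.mean(axis=1, keepdims=True))

print(np.round(np.cov(Z), 2))   # ≈ identity matrix
```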

## Further preprocessing

The success of ICA for a given data set may depend crucially on performing some application-dependent preprocessing steps. For example, if the data consists of time-signals, some band-pass filtering may be very useful. Note that if we linearly filter the observed signals $x_i(t)$ to obtain new signals $x_i^*(t)$, the ICA model still holds for $x_i^*(t)$, with the same mixing matrix.

This can be seen as follows. Denote by $\textbf{X}$ the matrix that contains the observations $\textbf{x(1), x(2), ..., x(T)}$ as its columns, and similarly for $\textbf{S}$. Then the ICA model can be expressed as:

$\textbf{X=AS}$

Now, time filtering of $\textbf{X}$ corresponds to multiplying $\textbf{X}$ from the right by a matrix, let us call it $\textbf{M}$. This gives

$\textbf{X^{*} = XM = ASM = AS^{*}}$

showing that the ICA model remains valid.
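This invariance is easy to check numerically. The sketch below uses an arbitrary illustrative mixing matrix and a simple 3-tap moving-average filter, written as right-multiplication by a banded (Toeplitz) matrix $\textbf{M}$:

```python
import numpy as np

rng = np.random.default_rng(6)
S = rng.uniform(-1, 1, size=(2, 500))     # independent sources as rows
A = np.array([[1.0, 0.5],                 # arbitrary illustrative mixing matrix
              [0.3, 0.8]])
X = A @ S

# A 3-tap moving-average filter over the time index, expressed as
# right-multiplication by a banded matrix M.
T = X.shape[1]
M = sum(np.eye(T, k=-k) for k in range(3)) / 3.0

X_filt = X @ M                            # filter the observations ...
S_filt = S @ M                            # ... or equivalently the sources
print(np.allclose(X_filt, A @ S_filt))    # True: same mixing matrix A
```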

### ICA Estimation Principles

#### Principle 1: Nonlinear decorrelation

From the above discussion, we see that we can estimate the mixing matrix $A \,$ by finding a matrix $W \,$ such that, writing $y = Wx \,$, the variables $g(y_i) \,$ and $h(y_j) \,$ are uncorrelated for any $i \neq j \,$ and suitable nonlinear functions $g \,$ and $h \,$.

#### Principle 2: Maximizing Non-Gaussianity

Loosely speaking, the Central Limit Theorem says that a sum of independent, identically distributed non-Gaussian random variables is closer to Gaussian than the original ones. Because of this, any mixture of the non-Gaussian independent components would be more Gaussian than the original signals $s \,$. Using this observation, we can recover the original signals from the observed signals $x \,$ as follows: find the weight vectors $w \,$ such that the projections $w^T x \,$ are the most non-Gaussian.
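This principle can be sketched with a brute-force search: on whitened data mixed by a hypothetical rotation, the projection direction with the largest |kurtosis| (one of the non-Gaussianity measures discussed below) should line up with one of the sources. All numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two independent, whitened, non-Gaussian sources: uniform on
# (-sqrt(3), sqrt(3)) has zero mean, unit variance and negative kurtosis.
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, 200000))
theta0 = 0.6                                       # hypothetical mixing angle
A = np.array([[np.cos(theta0), -np.sin(theta0)],
              [np.sin(theta0),  np.cos(theta0)]])  # orthogonal mixing matrix
x = A @ s

def kurt(y):
    """Sample kurtosis E{y^4} - 3(E{y^2})^2."""
    return np.mean(y**4) - 3 * np.mean(y**2) ** 2

# Grid search over unit vectors w = (cos t, sin t): the projection w^T x
# with the largest |kurtosis| should align with a source direction.
angles = np.linspace(0, np.pi, 360)
scores = [abs(kurt(np.cos(t) * x[0] + np.sin(t) * x[1])) for t in angles]
best = angles[int(np.argmax(scores))]
# best lies near theta0 (or theta0 + pi/2, which recovers the other source)
```

Practical algorithms such as FastICA replace this grid search with fast fixed-point iterations, but the objective is the same.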

## Measures of non-Gaussianity

### Kurtosis

Kurtosis is the classical measure of non-Gaussianity, defined by $kurt(y) = E\{y^4\} - 3(E\{y^2\})^2 \,$. Positive kurtosis typically implies a spiky pdf near zero with heavy tails (e.g. the Laplace distribution); negative kurtosis typically implies a flat pdf that is rather constant near zero and very small at the two ends (e.g. the uniform distribution with finite support).

As a computational measure of non-Gaussianity, kurtosis has the merit that it is easy to compute and has nice linearity properties. On the other hand, it is non-robust, because the sample kurtosis can be significantly affected by even a few outliers, regardless of the sample size.
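As a quick numerical check of the sign behaviour described above, one can compare the sample kurtosis of Gaussian, Laplace, and uniform draws (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def kurt(y):
    """Sample version of kurt(y) = E{y^4} - 3(E{y^2})^2."""
    return np.mean(y**4) - 3 * np.mean(y**2) ** 2

gauss = rng.standard_normal(500000)
laplace = rng.laplace(size=500000)        # spiky near zero, heavy tails
uniform = rng.uniform(-1, 1, 500000)      # flat pdf with finite support

print(round(kurt(gauss), 2))    # ≈ 0 for a Gaussian
print(round(kurt(laplace), 2))  # positive (12 in theory for unit-scale Laplace)
print(round(kurt(uniform), 2))  # negative
```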

### Negentropy

#### Intuitive explanation

Before understanding negentropy, we first have to understand entropy, a key concept in information theory. Loosely speaking, entropy is a measure of how "spread out" a random variable is, and a rule of thumb is that a more "spread out" pdf has a higher entropy. An important theorem in information theory states that the Gaussian distribution has the largest entropy among all distributions with the same variance; in informal language, the Gaussian distribution is the most "spread out" pdf. Negentropy measures non-Gaussianity by the difference in entropy between a pdf and the corresponding Gaussian distribution; this is made precise in the following technical explanation.

#### Technical explanation

The entropy of a discrete random variable $X \,$ with possible values $\{x_1, x_2, ..., x_n\} \,$ is defined as $H(X) = -\sum_{i=1}^n {p(x_i) \log p(x_i)}$

The (differential) entropy of a continuous random variable $X \,$ with probability density function $f \,$ is similarly defined as $H[X] = -\int\limits_{-\infty}^{\infty} f(x) \log f(x)\, dx$

It is obvious how the definition of differential entropy can be extended to higher dimensions.

For a random vector $y\,$ with covariance matrix $C \,$, its negentropy is defined as $J(y) = H(Gaussian_C) - H(y) \,$, where $Gaussian_C \,$ denotes the Gaussian distribution with covariance matrix $C \,$. Note that negentropy is always non-negative and equals zero if and only if the distribution is Gaussian.
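For distributions with closed-form entropies, negentropy can be evaluated exactly. As an illustration (in nats), the sketch below computes the negentropy of a unit-variance uniform distribution from the standard entropy formulas:

```python
import numpy as np

# Closed-form differential entropies (in nats):
#   Gaussian with variance v : 0.5 * log(2*pi*e*v)
#   Uniform on (-a, a)       : log(2a), with variance a**2 / 3
a = np.sqrt(3.0)                      # chosen so the uniform has unit variance
H_gauss = 0.5 * np.log(2 * np.pi * np.e)
H_unif = np.log(2 * a)

J = H_gauss - H_unif                  # negentropy of the unit-variance uniform
print(J)  # ≈ 0.176, positive as required for a non-Gaussian distribution
```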

#### Empirical estimation of negentropy

In practice, negentropy has to be estimated from a finite sample. There are two main ways to do this. The first approach is to Taylor expand negentropy and keep the lower-order terms, which results in an estimate of negentropy expressed in higher moments (3rd degree and higher) of the pdf. As the estimate involves higher moments, it suffers from the same non-robustness problem as kurtosis. The second, and more robust, approach finds the distribution with the maximum entropy that is compatible with the observed sample, and estimates the negentropy of the real (and unknown) distribution by the negentropy of this "entropy-maximizing" distribution. While the second approach is more robust, it is also more computationally involved.
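The first approach can be made concrete: for a standardized variable $y$, the classical moment-based approximation is $J(y) \approx \frac{1}{12}E\{y^3\}^2 + \frac{1}{48}kurt(y)^2$. A minimal numerical check on Laplace-distributed data (sample size and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.laplace(size=500000)
y = (y - y.mean()) / y.std()          # standardize to zero mean, unit variance

# Moment-based approximation of negentropy (the Taylor-expansion route):
#   J(y) ≈ (1/12) E{y^3}^2 + (1/48) kurt(y)^2
skew_term = np.mean(y**3) ** 2 / 12
kurt_term = (np.mean(y**4) - 3) ** 2 / 48   # kurt(y) = E{y^4} - 3 when var = 1
J_approx = skew_term + kurt_term
print(J_approx)
```

Because the estimate is built from third and fourth sample moments, a handful of outliers in `y` can move it substantially, which is exactly the non-robustness issue noted above.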

## A brief history of ICA

The technique of ICA was first introduced in 1982 in a simplified model of motion coding in muscle contraction, where the original signals were the angular position and velocity of a moving joint and the observed signals were the measurements from two types of sensors measuring muscle contraction. Throughout the 1980s, ICA was mostly known among French researchers but not among the international research community. Many ICA algorithms were developed from the early 1990s, though ICA remained a small and narrow research area until the mid-1990s. The breakthrough happened between the mid-1990s and the late 1990s, during which a number of very fast ICA algorithms, of which FastICA was one, were developed so that ICA could be applied to large-scale problems. Since 2000, many international workshops and papers have been devoted to ICA research, and ICA has now become an established and mature field of research.

## Kernel ICA <ref>Bach, F. R. & Jordan, M. I. (2002). Kernel Independent Component Analysis. Journal of Machine Learning Research, 3, 1-48.</ref>

Bach and Jordan (2002) extended ICA to functions in a Reproducing Kernel Hilbert Space (RKHS), rather than the single nonlinear function considered in the earliest works. To do so, they used canonical correlation, that is, the correlation of feature maps of the random variables under the kernel associated with the RKHS, rather than the direct correlation of the variables themselves.

## Applications

ICA has been applied to blind source separation problems in signal processing, but it is also an important research topic in many areas such as biomedical engineering, medical imaging, speech enhancement, remote sensing, communication systems, exploration seismology, geophysics, econometrics, and data mining.

### Finding hidden factors in financial data

Suppose we have the cashflow of several stores belonging to the same retail chain. The goal is to find the fundamental factors that are common to all stores and affect the cashflow. In this case, factors like seasonal variation and price changes of various commodities affect all stores independently. Kiviluoto and Oja (1998)<ref>Kiviluoto, K. & Oja, E. (1998). Independent component analysis for parallel financial time series. Proceedings of the International Conference on Neural Information Processing (ICONIP'98), Vol. 2 (pp. 895-898). Tokyo, Japan.</ref> applied ICA to this cashflow problem. However, I (as a general contributor) still think that factor independence is a strong and rather unrealistic assumption in this particular case; imagine a case where price changes of various commodities are mixed with seasonal variations.