Adaptive Dimension Reduction for Clustering High Dimensional Data
Revision as of 00:03, 29 July 2013

1. Introduction

Clustering methods such as K-means and EM suffer from local-minima problems. In high-dimensional space the cost-function surface is very rugged, and the search easily gets trapped near the initial configuration. The conventional remedy is to try a number of initial values and keep the best of the resulting solutions.
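As a baseline for comparison, the conventional multiple-restart remedy can be sketched as follows (a minimal NumPy sketch; the names `kmeans` and `kmeans_restarts` are illustrative, not from the paper):

```python
import numpy as np

def kmeans(X, K, iters=100, rng=None):
    """Plain Lloyd's K-means; returns (centers, labels, cost)."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), K, replace=False)]   # random initial centers
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                            # hard assignment
        new = np.array([X[labels == k].mean(0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    cost = ((X - centers[labels]) ** 2).sum()           # within-cluster sum of squares
    return centers, labels, cost

def kmeans_restarts(X, K, n_starts=10, seed=0):
    """Run K-means from several random starts and keep the lowest-cost run."""
    runs = [kmeans(X, K, rng=seed + s) for s in range(n_starts)]
    return min(runs, key=lambda r: r[2])
```

Each restart can land in a different local minimum; keeping the lowest-cost run is exactly the "try a number of initial values" strategy the adaptive approach aims to improve on.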

The paper proposes an approach to tackle this problem based on dimension reduction. Specifically, (i) dimension reduction is treated as a dynamic process that is adaptively adjusted and integrated with the clustering process, and (ii) cluster membership is used effectively to connect the reduced-dimensional space with the full-dimensional space.

The paper focuses on K-means and EM algorithms using a mixture model of spherical Gaussian components, which retains identical model parameters in the reduced low-dimensional subspace as in the high-dimensional space.

2. Effective Dimension for Clustering

The idea is to perform clustering in low-dimensional subspaces, where a low-dimensional subspace is interpreted as containing the relevant attributes (linear combinations of the coordinates). By linear discriminant analysis, the dimensionality r of the subspace should satisfy [math] r \leq K-1[/math]. Thus the effective clustering dimensions for the K spherical Gaussians are spanned by the K centers [math] \mu_1, ...,\mu_K[/math], for which r=K-1; we call the r-dim subspace passing through all K centers the relevant subspace. The effective dimensionality of the relevant subspace can be less than K-1; this happens when the K cluster centers lie in a subspace of dimension [math]r\lt K-1[/math].
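The claim that K centers span an (affine) subspace of dimension at most K-1 can be checked numerically; a small sketch with randomly drawn centers (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 10
mu = rng.normal(size=(K, d))       # K cluster centers in d-dim space
M = mu - mu.mean(axis=0)           # center the K centroids
rank = np.linalg.matrix_rank(M)    # dimension of the subspace they span
assert rank <= K - 1               # K points span at most a (K-1)-dim affine subspace
```

For generic (randomly placed) centers the rank is exactly K-1; degenerate placements, e.g. collinear centers, give the [math]r \lt K-1[/math] case mentioned above.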

3. EM in relevant subspace

The idea: the irrelevant dimensions can be integrated out, and the resulting marginal distribution follows the same Gaussian mixture functional form, so we can move freely between the subspace and the original space. Knowing the centers in the subspace, we can directly infer the centers in the original space. Assume the following mixture model:

[math]p(x)=\pi_1g_1^d(x-\mu_1)+...+\pi_kg_k^d(x-\mu_k)[/math]

Where each component is a spherical Gaussian distribution,

[math]g_k^d(x-\mu_k)=\frac{1}{(\sqrt{2\pi}\sigma_k)^d} \exp(-\frac{||x-\mu_k||^2}{2\sigma_k^2})[/math] where x and [math] \mu_k[/math] are vectors in d-dim space; this distribution is denoted [math] N^{(d)}(\mu_k,\sigma_k) [/math]. Spherical Gaussian functions have two invariance properties: (i) they remain invariant under any orthogonal coordinate rotation [math] R: x\leftarrow Rx[/math], i.e. [math] g_k^d(Rx|R\theta)=g_k^d(x|\theta)[/math]; and (ii) under any coordinate translation (shift) operation [math]L[/math]: [math] g_k^d(Lx|L\theta)=g_k^d(x|\theta)[/math].
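Both invariance properties can be verified numerically; a short sketch (the function `g` is an illustrative implementation of the density above):

```python
import numpy as np

def g(x, mu, sigma):
    """Spherical Gaussian density N^(d)(mu, sigma) evaluated at x."""
    d = len(x)
    norm = (np.sqrt(2 * np.pi) * sigma) ** d
    return np.exp(-np.sum((x - mu) ** 2) / (2 * sigma ** 2)) / norm

rng = np.random.default_rng(0)
d = 5
x, mu, sigma = rng.normal(size=d), rng.normal(size=d), 0.7
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal rotation R

# (i) rotation invariance: rotate both the point and the center
assert np.isclose(g(Q @ x, Q @ mu, sigma), g(x, mu, sigma))

# (ii) translation invariance: shift both the point and the center
shift = rng.normal(size=d)
assert np.isclose(g(x + shift, mu + shift, sigma), g(x, mu, sigma))
```

The invariances hold because the density depends on x and [math]\mu_k[/math] only through the Euclidean norm [math]||x-\mu_k||[/math], which rotations and shared shifts preserve.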


Theorem 1:

In EM clustering using spherical Gaussian mixture models in d dimensions, after integrating out the irrelevant dimensions, the marginal probability becomes

[math] p(y)=\pi_1g_1^r(y-\upsilon_1)+ \cdot \cdot \cdot +\pi_kg_k^r(y-\upsilon_k) [/math]

This is exactly the same type of Gaussian mixture, now with centers [math]\upsilon_k[/math] in the r-dim subspace.
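A sketch of why the theorem holds, using the invariance properties above: rotate coordinates so that the first r axes span the relevant subspace, and split each vector as x = (y, z) with [math]\mu_k = (\upsilon_k, w_k)[/math]. A spherical Gaussian then factorizes coordinate-wise, and the irrelevant factor integrates to one:

```latex
g_k^d(x-\mu_k) = g_k^r(y-\upsilon_k)\, g_k^{d-r}(z-w_k),
\qquad
\int g_k^{d-r}(z-w_k)\, dz = 1
\;\Rightarrow\;
p(y) = \sum_{k=1}^{K} \pi_k\, g_k^r(y-\upsilon_k).
```

Each component keeps its prior [math]\pi_k[/math] and width [math]\sigma_k[/math]; only the dimensionality of the Gaussian changes.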

4. Adaptive Dimension Reduction for EM

The centers (or centroids, in K-means) obtained by clustering in the r-dim subspace can be uniquely traced back to the original d-dim space by using the cluster membership of each data point. This observation is the basis of ADR-EM clustering.

The cluster membership information is contained in the posterior probability [math]h_i^k[/math],
[math]h_i^k = Pr(c_i =k | y_i, \theta)[/math]
which measures the probability that point [math]i[/math] belongs to cluster [math]c_k[/math] given the current model parameters and the evidence (the value of [math]y_i[/math]).

The EM algorithm is as follows:

(i) Initialize model parameters [math] \pi_k, \upsilon_k, \sigma_k[/math]

(ii) Compute [math] h_i^k [/math]

(iii) Update.

compute the number of points belonging to cluster [math] c_k : n_k=\sum_i h_i^k[/math]

update priors: [math] \pi_k=n_k/N [/math]

update centers: [math] \upsilon_k=\sum_i h_i^ky_i/n_k[/math]

update variances: [math] \sigma_k^2=\sum_i h_i^k||y_i-\upsilon_k||^2/(rn_k)[/math]

(iv) Repeat steps (ii) and (iii) until convergence
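Steps (i)–(iv) can be sketched as a compact NumPy loop (illustrative code, not the authors' implementation; a fixed iteration count stands in for a convergence test, and the constant term of the Gaussian is dropped since it cancels in the posterior):

```python
import numpy as np

def em_spherical(Y, K, iters=50, seed=0):
    """EM for a K-component spherical Gaussian mixture in the r-dim subspace.
    Returns (priors, centers, sigmas, posteriors h)."""
    rng = np.random.default_rng(seed)
    N, r = Y.shape
    centers = Y[rng.choice(N, K, replace=False)]           # (i) initialize v_k
    sigmas = np.full(K, Y.std())                           #     and sigma_k, pi_k
    priors = np.full(K, 1.0 / K)
    for _ in range(iters):
        d2 = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        log_h = np.log(priors) - d2 / (2 * sigmas ** 2) - r * np.log(sigmas)
        log_h -= log_h.max(1, keepdims=True)
        h = np.exp(log_h)
        h /= h.sum(1, keepdims=True)                       # (ii) posteriors h_i^k
        nk = h.sum(0)                                      # (iii) n_k = sum_i h_i^k
        priors = nk / N                                    #       pi_k = n_k / N
        centers = (h.T @ Y) / nk[:, None]                  #       v_k = sum h y / n_k
        d2 = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        sigmas = np.sqrt((h * d2).sum(0) / (r * nk))       #       sigma_k^2 = sum h d2/(r n_k)
    return priors, centers, sigmas, h
```

The update formulas in the loop mirror the three update equations of step (iii) above, with [math]\sigma_k[/math] recovered as the square root of the variance estimate.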

The complete ADR-EM algorithm

(i) Center the data and rescale it such that [math]\Sigma=I[/math]. Choose an appropriate K as input parameter, and choose the dimensionality r of the reduced subspace. In general r=K-1 is recommended, but r=K or [math] r \lt K-1[/math] are also appropriate.

(ii) Do the first dimension reduction using PCA or any other method, including random starts.

(iii) Run EM in the r-dim subspace to obtain clusters. Use the cluster membership to construct cluster centroids in the original space. Check convergence; if converged, go to step (v).

(iv) Compute the new r-dim subspace spanned by the K centroids, using either an SVD or a QR basis. Project the data into this new subspace and go to step (iii).

(v) Output the results, converting posterior probabilities to discrete cluster indicators. The relevant attributes are also identified.
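The full loop might be sketched as follows. This is a simplified illustration, not the authors' implementation: hard assignments stand in for the soft EM of step (iii), the rescaling in step (i) is a crude per-coordinate whitening, and the subspace-stability test is one of several possible convergence checks:

```python
import numpy as np

def adr_em(X, K, r=None, outer=20, seed=0):
    """ADR-EM sketch: whiten, project to an r-dim subspace (PCA start), cluster
    there, rebuild centroids in full space, recompute the subspace from the K
    centroids via SVD, and repeat until the subspace stabilizes."""
    r = (K - 1) if r is None else r
    X = X - X.mean(0)                                    # step (i): center
    X = X / X.std(0)                                     # crude rescale toward Sigma = I
    U = np.linalg.svd(X, full_matrices=False)[2][:r].T   # step (ii): PCA basis (d, r)
    rng = np.random.default_rng(seed)
    for _ in range(outer):
        Y = X @ U                                        # project data into subspace
        # step (iii): clustering in the r-dim subspace (hard-assignment shortcut)
        C = Y[rng.choice(len(Y), K, replace=False)]
        for _ in range(50):
            lab = ((Y[:, None] - C[None]) ** 2).sum(-1).argmin(1)
            C = np.array([Y[lab == k].mean(0) if np.any(lab == k) else C[k]
                          for k in range(K)])
        # cluster membership -> centroids in the original d-dim space
        M = np.array([X[lab == k].mean(0) if np.any(lab == k)
                      else np.zeros(X.shape[1]) for k in range(K)])
        # step (iv): new subspace spanned by the K centroids
        U_new = np.linalg.svd(M - M.mean(0), full_matrices=False)[2][:r].T
        if np.allclose(np.abs(U_new.T @ U), np.eye(r), atol=1e-6):
            break                                        # subspace converged: step (v)
        U = U_new
    return lab, M, U
```

The key adaptive step is the recomputation of U from the centroids: the subspace is re-aimed at the directions that currently separate the clusters, rather than fixed once by the initial PCA.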

5. Adaptive Dimension Reduction for K-means

K-means clustering can be viewed as a special case of EM with three simplifications: (i) [math]\sigma_1 = \cdot \cdot \cdot =\sigma_K=\sigma[/math]; (ii) [math] \pi_1=\cdot \cdot \cdot =\pi_K[/math]; (iii) the limit [math]\sigma \rightarrow 0 [/math].

In K-means clustering we use hard group assignments, while in a mixture of Gaussians we use soft/probabilistic group assignments.
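The [math]\sigma \rightarrow 0 [/math] limit can be seen numerically: with equal priors and widths, the posterior reduces to a softmax of [math]-d^2/(2\sigma^2)[/math], which sharpens into a hard nearest-center assignment as sigma shrinks (illustrative sketch):

```python
import numpy as np

def soft_assign(d2, sigma):
    """Posterior h for equal priors and equal widths: softmax of -d2/(2 sigma^2)."""
    z = -d2 / (2 * sigma ** 2)
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    h = np.exp(z)
    return h / h.sum(axis=1, keepdims=True)

# squared distances of one point to three cluster centers
d2 = np.array([[0.1, 2.0, 3.0]])
wide = soft_assign(d2, 1.0)       # large sigma: soft, probabilistic assignment
narrow = soft_assign(d2, 0.05)    # sigma -> 0: concentrates on the nearest center
assert np.allclose(narrow, [[1.0, 0.0, 0.0]], atol=1e-6)
assert wide[0, 0] < 1.0 - 1e-6    # still spread across clusters for larger sigma
```

In the limit the posterior becomes an indicator of the nearest center, which is exactly the hard assignment rule of K-means.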