Adaptive dimension reduction for clustering high dimensional data


1. Introduction

Clustering methods such as K-means and EM suffer from the problem of local minima. In high-dimensional space, the cost-function surface is very rugged, and it is easy to get trapped somewhere close to the initial configuration. The conventional remedy is to try a number of initial values and pick the best of the resulting solutions.

The paper proposes an approach to tackle this problem based on dimension reduction. Specifically, (i) dimension reduction is treated as a dynamic process that is adaptively adjusted and integrated with the clustering process, and (ii) cluster membership is used effectively to connect the reduced-dimensional space and the full-dimensional space.

The paper focuses on the K-means and EM algorithms using a mixture model of spherical Gaussian components, which retains identical model parameters in the reduced low-dimensional subspace as in the high-dimensional space.

2. Effective Dimension for Clustering

The idea is to perform clustering in a low-dimensional subspace, interpreted as containing the relevant attributes (linear combinations of the original coordinates). Based on linear discriminant analysis, the dimensionality [math]\displaystyle{ r }[/math] of the subspace should satisfy [math]\displaystyle{ r \leq K-1 }[/math]. Thus the effective clustering dimensions for the K spherical Gaussians are spanned by the K centers [math]\displaystyle{ \mu_1, ...,\mu_K }[/math], for which [math]\displaystyle{ r=K-1 }[/math]. We call the [math]\displaystyle{ r }[/math]-dim subspace passing through all [math]\displaystyle{ K }[/math] centers the relevant subspace. The effective dimensionality of the relevant subspace could be less than [math]\displaystyle{ K-1 }[/math]; this happens when the K cluster centers themselves lie in a subspace of dimension [math]\displaystyle{ r\lt K-1 }[/math].
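As a quick illustration (not from the paper): the affine subspace passing through K centers has dimension at most K-1, which the following small numpy check confirms for K=3 hypothetical centers in d=10 dimensions.

import numpy as np

np.random.seed(0)
d, K = 10, 3                        # ambient dimension and number of clusters
centers = np.random.randn(K, d)     # K hypothetical cluster centers in d-dim space

# The affine subspace through the K centers is spanned by their differences,
# so its dimension is at most K-1.
diffs = centers - centers.mean(axis=0)
print(np.linalg.matrix_rank(diffs))   # 2, i.e. K-1, for generic centers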

3. EM in relevant subspace

The idea is that the irrelevant dimensions can be integrated out, and the resulting marginal distribution retains the same Gaussian-mixture functional form. This allows us to move freely between the subspace and the original space: knowing the centers in the subspace, we can directly infer the centers in the original space. Assume the following mixture model:

[math]\displaystyle{ p(x)=\pi_1g_1^d(x-\mu_1)+...+\pi_kg_k^d(x-\mu_k) }[/math]

Where each component is a spherical Gaussian distribution,

[math]\displaystyle{ g_k^d(x-\mu_k)=\frac{1}{(\sqrt{2\pi}\sigma_k)^d} \exp\left(-\frac{||x-\mu_k||^2}{2\sigma_k^2}\right) }[/math], where [math]\displaystyle{ x }[/math] and [math]\displaystyle{ \mu_k }[/math] are vectors in d-dim space. This component can be denoted [math]\displaystyle{ N^{(d)}(\mu_k,\sigma_k) }[/math]. Spherical Gaussian functions have two invariance properties: (i) they remain invariant under any orthogonal coordinate rotation [math]\displaystyle{ R: x\leftarrow Rx }[/math], i.e. [math]\displaystyle{ g_k^d(Rx|R\theta)=g_k^d(x|\theta) }[/math]; (ii) they remain invariant under any coordinate translation (shift) operation [math]\displaystyle{ L }[/math]: [math]\displaystyle{ g_k^d(Lx|L\theta)=g_k^d(x|\theta) }[/math].
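As a quick numerical check (not part of the paper), the following sketch verifies the rotation-invariance property for a spherical Gaussian density using a random orthogonal matrix.

import numpy as np

def spherical_gaussian(x, mu, sigma):
    # density of the spherical Gaussian N^(d)(mu, sigma) evaluated at x
    d = len(x)
    return np.exp(-np.sum((x - mu) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma) ** d

rng = np.random.default_rng(0)
d = 5
x, mu, sigma = rng.normal(size=d), rng.normal(size=d), 1.3
R, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix

# Rotating both the point and the center leaves the density unchanged.
print(np.isclose(spherical_gaussian(R @ x, R @ mu, sigma),
                 spherical_gaussian(x, mu, sigma)))   # True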


Theorem 1:

In EM clustering using spherical Gaussian mixture models in d dimensions, after integrating out the irrelevant dimensions, the marginal probability becomes

[math]\displaystyle{ p(y)=\pi_1g_1^r(y-\upsilon_1)+ \cdot \cdot \cdot +\pi_kg_k^r(y-\upsilon_k) }[/math]

This is exactly the same type of spherical Gaussian mixture, now in the r-dim subspace, where [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \upsilon_k }[/math] are the projections of [math]\displaystyle{ x }[/math] and [math]\displaystyle{ \mu_k }[/math] onto the relevant subspace.
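A one-line sketch of why this holds (using the rotation invariance above, writing [math]\displaystyle{ y }[/math] and [math]\displaystyle{ z }[/math] for the relevant and irrelevant coordinates of [math]\displaystyle{ x }[/math], and [math]\displaystyle{ \upsilon_k }[/math] and [math]\displaystyle{ w_k }[/math] for those of [math]\displaystyle{ \mu_k }[/math]; the notation [math]\displaystyle{ w_k }[/math] is ours): the spherical Gaussian factorizes across the two coordinate blocks, and the factor over the irrelevant coordinates integrates to one,

[math]\displaystyle{ \int g_k^d(x-\mu_k)\,dz = g_k^r(y-\upsilon_k)\int g_k^{d-r}(z-w_k)\,dz = g_k^r(y-\upsilon_k). }[/math]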

4. Adaptive Dimension Reduction for EM

The centers (or centroids in K-means) obtained by clustering in the r-dim subspace can be uniquely traced back to the original d-dim space by using the cluster membership of each data point. This observation is the basis of ADR-EM clustering.

The cluster membership information is contained in the posterior probability [math]\displaystyle{ h_i^k }[/math],

[math]\displaystyle{ h_i^k = Pr(c_i =k | y_i, \theta) }[/math]

This measures the probability that point [math]\displaystyle{ i }[/math] belongs to cluster [math]\displaystyle{ k }[/math] given the current model parameters and the evidence (the value of [math]\displaystyle{ y_i }[/math]).
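The summary does not spell out how [math]\displaystyle{ h_i^k }[/math] is computed; by Bayes' rule applied to the subspace mixture it is

[math]\displaystyle{ h_i^k = \frac{\pi_k\, g_k^r(y_i-\upsilon_k)}{\sum_{l=1}^{K} \pi_l\, g_l^r(y_i-\upsilon_l)}. }[/math]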

The EM algorithm is as follows:

(i) Initialize model parameters [math]\displaystyle{ \pi_k, \upsilon_k, \sigma_k }[/math]

(ii) Compute [math]\displaystyle{ h_i^k }[/math]

(iii) Update.

compute the effective number of points belonging to cluster [math]\displaystyle{ k }[/math]: [math]\displaystyle{ n_k=\sum_i h_i^k }[/math]

update priors: [math]\displaystyle{ \pi_k=n_k/N }[/math]

update centers: [math]\displaystyle{ \upsilon_k=\sum_i h_i^ky_i/n_k }[/math]

update variances: [math]\displaystyle{ \sigma_k^2=\sum_i h_i^k||y_i-\upsilon_k||^2/(r\, n_k) }[/math]

(iv) Steps (ii) and (iii) are repeated until convergence.
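A minimal numpy sketch of these steps (an illustrative implementation, not the authors' code) might look as follows; Y is the n x r matrix of points already projected into the subspace, and the function name spherical_em is ours.

import numpy as np

def spherical_em(Y, K, n_iter=100, tol=1e-6, seed=0):
    # EM for a mixture of K spherical Gaussians in the r-dim subspace.
    n, r = Y.shape
    rng = np.random.default_rng(seed)
    centers = Y[rng.choice(n, K, replace=False)]     # upsilon_k
    sigma2 = np.full(K, Y.var())                     # sigma_k^2
    pi = np.full(K, 1.0 / K)                         # pi_k
    for _ in range(n_iter):
        # E-step: posterior h_i^k via Bayes' rule (computed in log space for stability).
        sq = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # n x K
        log_h = np.log(pi) - 0.5 * sq / sigma2 - 0.5 * r * np.log(2 * np.pi * sigma2)
        log_h -= log_h.max(axis=1, keepdims=True)
        h = np.exp(log_h)
        h /= h.sum(axis=1, keepdims=True)
        # M-step: update priors, centers and variances.
        n_k = h.sum(axis=0)
        pi = n_k / n
        new_centers = (h.T @ Y) / n_k[:, None]
        sq = ((Y[:, None, :] - new_centers[None, :, :]) ** 2).sum(axis=-1)
        sigma2 = (h * sq).sum(axis=0) / (r * n_k)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return pi, centers, sigma2, h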

The complete ADR-EM algorithm:

(i) Center the data and rescale it such that [math]\displaystyle{ \Sigma=I }[/math]. Choose an appropriate K as an input parameter and a dimensionality r for the reduced subspace. In general, r=K-1 is recommended, but r=K or [math]\displaystyle{ r \lt K-1 }[/math] are also appropriate.

(ii) Perform the first dimension reduction using PCA or any other method, including random starts.

(iii) Run EM in the r-dim subspace to obtain clusters. Use the cluster membership to construct cluster centroids in the original space. Check convergence; if converged, go to step (v).

(iv) Compute the new r-dim subspace spanned by the K centroids using either SVD or a QR basis, and project the data into this new subspace. Go to step (iii).

(v) Output the results, converting posterior probabilities to discrete cluster indicators. The relevant attributes are also identified.
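The outer ADR-EM loop might be sketched as follows (again an illustrative reading of the algorithm, not the authors' code); it reuses the spherical_em sketch above and uses PCA via SVD for the initial projection.

import numpy as np

def adr_em(X, K, n_outer=20, seed=0):
    # X: n x d data matrix, assumed already centered and rescaled (step i).
    n, d = X.shape
    r = K - 1                                   # recommended subspace dimensionality
    # step (ii): initial dimension reduction with PCA (top-r right singular vectors)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    R = Vt[:r].T                                # d x r orthonormal basis
    for _ in range(n_outer):
        Y = X @ R                               # project data into the current subspace
        # step (iii): EM in the r-dim subspace (spherical_em is the sketch above)
        pi, centers_r, sigma2, h = spherical_em(Y, K, seed=seed)
        # trace the centers back to the full d-dim space via cluster membership
        n_k = h.sum(axis=0)
        centers_d = (h.T @ X) / n_k[:, None]
        # step (iv): new r-dim subspace spanned by the K full-space centers (via SVD)
        U, _, _ = np.linalg.svd(centers_d.T, full_matrices=False)
        new_R = U[:, :r]
        # stop when the subspace no longer changes (all principal angles close to zero)
        if np.allclose(np.linalg.svd(new_R.T @ R, compute_uv=False), 1.0, atol=1e-6):
            R = new_R
            break
        R = new_R
    labels = h.argmax(axis=1)                   # step (v): discrete cluster indicators
    return labels, centers_d, R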

5. Adaptive Dimension Reduction for K-means

K-means clustering can be viewed as a special case of EM with three simplifications: (i) [math]\displaystyle{ \sigma_1 = \cdot \cdot \cdot =\sigma_k=\sigma }[/math]; (ii) [math]\displaystyle{ \pi_1=\cdot \cdot \cdot =\pi_k }[/math]; (iii) [math]\displaystyle{ \sigma \rightarrow 0 }[/math].


In K-means clustering we use hard group assignment, while in the mixture of Gaussians we use soft/probabilistic group assignment.
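To see why the [math]\displaystyle{ \sigma \rightarrow 0 }[/math] limit yields hard assignments, note that in the posterior formula above the component whose center is nearest to [math]\displaystyle{ y_i }[/math] dominates exponentially, so

[math]\displaystyle{ h_i^k \rightarrow \begin{cases} 1 & \text{if } k = \arg\min_l ||y_i-\upsilon_l||^2 \\ 0 & \text{otherwise.} \end{cases} }[/math]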

Theorem 2:

Suppose we somehow know the correct r-dim relevant subspace defined by [math]\displaystyle{ R_r }[/math]. Let [math]\displaystyle{ Y=R_r^TX=R_r^T(x_1,\cdot\cdot\cdot, x_n) }[/math] and let [math]\displaystyle{ C_{\upsilon} =[\upsilon_1, \cdot \cdot \cdot, \upsilon_k] }[/math] be the K centroids in the r-dim subspace. Solve the K-means problem in the r-dim subspace,

[math]\displaystyle{ \underset{C_{\upsilon}}{\text{minimize}}\; J_r(Y,C_{\upsilon}) }[/math]

Use the cluster membership [math]\displaystyle{ H=(h_i^k) }[/math] obtained there to reconstruct the K centers [math]\displaystyle{ C_{\mu}^* = [ \mu_1^*, \cdot \cdot \cdot , \mu_k^*] }[/math] in the full-dimensional space. Then [math]\displaystyle{ C_{\mu}^* }[/math] is the exact optimal solution to the full-dimension K-means problem.

By Theorem 2, we only need to find the relevant subspace, which is much easier than finding [math]\displaystyle{ C_\mu^* }[/math] directly; this is the usefulness of Theorem 2. The adaptive dimension reduction K-means algorithm is based on this theorem, and the complete ADR-K-means algorithm is identical in structure to the ADR-EM algorithm.
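As a small illustration of Theorem 2 (a sketch assuming scikit-learn's KMeans for the subspace step; the helper name is ours, not from the paper), the full-dimensional centers can be reconstructed from the subspace cluster labels as follows.

import numpy as np
from sklearn.cluster import KMeans

def subspace_kmeans_with_full_centers(X, R, K, seed=0):
    # X: n x d centered data; R: d x r orthonormal basis of the relevant subspace.
    Y = X @ R                                    # project data into the r-dim subspace
    labels = KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(Y)
    # Reconstruct the K centers in the full d-dim space from the cluster membership.
    centers_full = np.vstack([X[labels == k].mean(axis=0) for k in range(K)])
    return labels, centers_full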


6. Experiments

They perform three experiments to assess the performance of the method.

The first experiment is based on synthetic data of 3 overlapping Gaussian clusters in 4-dimensional space. It is shown that the method can successfully find the clusters.

The second experiment is performed on a microarray gene expression dataset. The challenging feature of this dataset is its high dimensionality. The method is shown to cluster most of the points correctly.

The last experiment is performed on internet newsgroup clustering. It is shown that the method works better than simple K-means clustering in the original space.

7. Discussion

The algorithm in the paper is proposed to resolve the following dilemma: the cluster centers found in a low-dimensional space obtained by PCA or other dimension-reduction methods are not the optimal cluster centers in the original space, while heuristic optimization methods such as K-means or EM applied directly in the original high-dimensional space are easily trapped in local minima. In the paper, the optimal subspace for clustering is defined as the (K-1)-dimensional space spanned by the K cluster centers, assuming those centers are optimal. An iterative algorithm of joint unsupervised dimension reduction and clustering is developed. There are two things that might need to be considered.

Question: While the standard EM algorithm is proven to converge and has a sound statistical interpretation, the algorithm proposed in the paper is not guaranteed to converge, and even if it does eventually converge, the solution is not guaranteed to be close to the global optimum.

Hi Cheng Huan:

As for the question mentioned above, I think the adaptive method can guarantee a local optimum. Theorems 1 and 2 are the keys to the effectiveness of this adaptive method. Theorem 1 converts the convergence of EM in the high-dimensional space into convergence in the reduced lower-dimensional space. As the objective function becomes smoother in the lower-dimensional space, the adaptive method reduces the risk of getting trapped in a local optimum that is far from the global optimum. I am not sure why you conclude that this method cannot guarantee a local optimum. If you see my reply, could you please explain your concerns about the divergence of this adaptive method? Thanks.

Victor Zikun Xu