Learning Spectral Clustering, With Application To Speech Separation

From statwiki. Revision as of 17:56, 2 July 2009 by R2jiwraj.

Introduction

The paper <ref>Francis R. Bach and Michael I. Jordan, Learning Spectral Clustering, With Application To Speech Separation, Journal of Machine Learning Research 7 (2006) 1963-2001.</ref> presented here is about spectral clustering: it makes use of dimension reduction and learns a similarity matrix that generalizes to unseen datasets when spectral clustering is applied to them.

Clustering

Clustering refers to partitioning a given dataset into clusters such that data points in the same cluster are similar and data points in different clusters are dissimilar. Similarity is usually measured in terms of the distance between data points.

Formally stated, given a set of data points [math]X=\{{{\mathbf x}}_1,{{\mathbf x}}_2,\dots ,{{\mathbf x}}_P\}[/math], we would like to find [math]K[/math] disjoint clusters [math]{C{\mathbf =}\{C_k\}}_{k\in \{1,\dots ,K\}}[/math] such that [math]\bigcup{C_k}=X[/math], that optimize a certain objective function. The dimensionality of the data points is [math]D[/math], and [math]X[/math] can be represented as a matrix [math]{{\mathbf X}}_{D\times P}[/math]. The similarity matrix that measures the similarity between each pair of points is denoted by [math]{{\mathbf W}}_{P\times P}[/math]. This similarity measure is large for points within the same cluster and small for points in different clusters. [math]{\mathbf W}[/math] has non-negative elements and is symmetric. A classical similarity matrix for clustering is the diagonally-scaled Gaussian similarity, defined as
[math]\mathbf W(i,j)= \rm exp (-(\mathbf{x}_i-\mathbf{x}_j)^{\rm T}Diag(\boldsymbol{\alpha})(\mathbf{x}_i-\mathbf{x}_j) ) [/math]
where [math]{\mathbf \boldsymbol{\alpha} }\in {{\mathbb R}}^D[/math] is a vector of positive parameters, and [math]\rm Diag(\boldsymbol{\alpha} )[/math] denotes the [math]D\times D[/math] diagonal matrix with diagonal [math]{\boldsymbol{\alpha} }[/math].
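The diagonally-scaled Gaussian similarity is easy to compute in vectorized form. The following NumPy sketch (the function name and layout are mine, not the paper's) takes [math]{\mathbf X}[/math] as a [math]D\times P[/math] matrix and returns the [math]P\times P[/math] matrix [math]{\mathbf W}[/math]:

```python
import numpy as np

def gaussian_similarity(X, alpha):
    """W(i,j) = exp(-(x_i - x_j)^T Diag(alpha) (x_i - x_j)).

    X is D x P (columns are data points); alpha is a length-D vector of
    positive scale parameters, one per feature.
    """
    Xs = X * np.sqrt(alpha)[:, None]                 # scale feature d by alpha_d^{1/2}
    sq = np.sum(Xs ** 2, axis=0)                     # squared norms of scaled points
    d2 = sq[:, None] + sq[None, :] - 2.0 * Xs.T @ Xs # pairwise scaled squared distances
    return np.exp(-np.maximum(d2, 0.0))              # clip tiny negatives from round-off

# Example: 3 points in R^2 with per-feature scales (1, 0.5)
X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
W = gaussian_similarity(X, np.array([1.0, 0.5]))
```

The resulting matrix is symmetric with non-negative entries and ones on the diagonal, as required of a similarity matrix.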

Objective functions

Objective function for K-means clustering

Given the number of clusters [math]K[/math], K-means clustering aims to minimize an objective function (the sum of within-cluster distances) over all clustering schemes [math]C[/math].
[math]\mathop{\min_C} J=\sum^K_{k=1}\sum_{\mathbf x \in C_k}\|\mathbf x - \boldsymbol{\mu}_k\|^2[/math]
where [math]{\boldsymbol{\mu}}_k{\mathbf =}\frac{{\mathbf 1}}{\left|C_k\right|}\sum_{{\mathbf x}\in C_k}{{\mathbf x}}[/math] is the mean of cluster [math]C_k[/math].

Min cut

For two subsets [math]A,B\subset X[/math], we define
[math]cut(A,B)=\sum_{{\mathbf x}_i \in A}\sum_{{\mathbf x}_j \in B}\mathbf W (i,j)[/math]
Mincut is the sum of inter-cluster weights.
[math]Mincut(C)=\sum^K_{k=1} cut(C_k,X \backslash C_k)[/math]

Normalized cut

The normalized cut in the paper is defined as
[math]Ncut(C)=\sum^K_{k=1}\frac{cut(C_k,X\backslash C_k)}{cut(C_k,X)}=\sum^K_{k=1}\frac{cut(C_k,X)-cut(C_k,C_k)}{cut(C_k,X)}=K-\sum^K_{k=1}{\frac{cut(C_k,C_k)}{cut(C_k,X)}}[/math]
Normalized cut takes a small value if the clusters [math]C_k[/math] are not too small <ref> Ulrike von Luxburg, A Tutorial on Spectral Clustering, Technical Report No. TR-149, Max Planck Institute for Biological Cybernetics.</ref> as measured by the intra-cluster weights. So it tries to achieve balanced clusters. It is unlikely that we will have clusters containing one data point.

The matrix representation of Normalized cut

Let [math]{{\mathbf e}}_k\in {\{0,1\}}^P[/math] be the indicator vector of cluster [math]C_k[/math], whose non-zero elements indicate the data points in cluster [math]C_k[/math]. Therefore, knowing [math]{\mathbf E}{\mathbf =}({{\mathbf e}}_1,\dots ,{{\mathbf e}}_K)[/math] is equivalent to knowing the clustering scheme [math]C[/math]. Further, let [math]{\mathbf D}[/math] denote the diagonal matrix whose [math]i[/math]-th diagonal element is the sum of the elements in the [math]i[/math]-th row of [math]{\mathbf W}[/math], that is, [math]{\mathbf D}{\mathbf =}{\rm Diag(}{\mathbf W}\cdot {\mathbf 1}{\rm )}[/math], where [math]{\mathbf 1}[/math] is the all-ones vector in [math]{{\mathbb R}}^P[/math].
So the normalized cut can be written as
[math]Ncut(C)=C(\mathbf{W,E})=\sum^K_{k=1}\frac{{\mathbf e}^{\rm T}_k (\mathbf{D-W}){\mathbf e}_k}{{\mathbf e}^{\rm T}_k (\mathbf{D}){\mathbf e}_k}=K-tr(\mathbf {E^{\rm T} W E}(\mathbf {E^{\rm T} D E})^{-1})[/math]
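The equivalence between the elementwise definition of the normalized cut and the matrix form above can be checked numerically. In this sketch the similarity matrix and the partition are toy choices of my own:

```python
import numpy as np

# Toy similarity matrix (symmetric, non-negative) and a 2-cluster partition
W = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])
labels = np.array([0, 0, 1, 1])
K = 2

E = np.zeros((4, K))
E[np.arange(4), labels] = 1.0                 # indicator matrix, columns e_k
D = np.diag(W.sum(axis=1))                    # degree matrix Diag(W 1)

# Elementwise: sum_k cut(C_k, X \ C_k) / cut(C_k, X)
ncut_sum = 0.0
for k in range(K):
    in_k = labels == k
    cut_out = W[np.ix_(in_k, ~in_k)].sum()    # inter-cluster weights
    vol = W[in_k, :].sum()                    # cut(C_k, X)
    ncut_sum += cut_out / vol

# Matrix form: K - tr(E^T W E (E^T D E)^{-1})
ncut_mat = K - np.trace(E.T @ W @ E @ np.linalg.inv(E.T @ D @ E))
```

Both computations give the same value, and since each ratio is below one, the normalized cut is bounded by [math]K[/math].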

Spectral Clustering

Minimizing the normalized cut is NP-hard, so we turn to a relaxed version of the problem.

Theorem 1

Minimizing the normalized cut over all [math]C[/math] is equivalent to the following optimization problem (referred to as the original optimization problem):
[math]\mathop{\min_{\mathbf Y}}K-tr(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y})[/math]
subject to
[math]{\mathbf Y}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\mathbf \Lambda }[/math] (1a)
and
[math]{{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I}[/math] (1b)
where [math]{\mathbf \Lambda }\in {{\mathbb R}}^{K\times K}[/math] and [math]{\mathbf Y}\in {{\mathbb R}}^{P\times K}[/math].
In other words, given [math]{\mathbf E}[/math] and letting [math]\mathbf{\Lambda =(E^{\rm T} D E)^{\rm{-1/2}}}[/math], we can form a candidate solution [math]{\mathbf Y}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {D E}}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math] for the above optimization problem.

Relaxed optimization problem

Since minimizing the normalized cut is NP-hard, its equivalent optimization problem is NP-hard too. However, by removing constraint (1a) in Theorem 1, a relaxed problem is obtained.


[math]\mathop{\min_{\mathbf Y}}K-tr(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y})[/math]
subject to
[math]{{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I}[/math]
where [math]{\mathbf Y}\in {{\mathbb R}}^{P\times K}[/math].

Theorem 2

The maximum of [math]tr\left({{\mathbf Y}}^{{\rm T}}\left({{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}\right){\mathbf Y}\right)[/math] over matrices [math]{\mathbf Y}\in {{\mathbb R}}^{P\times K}[/math] such that [math]{{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I}[/math] is the sum of the [math]K[/math] largest eigenvalues of [math]\tilde{{\mathbf W}}={{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math]. It is attained at all [math]{\mathbf Y}[/math] of the form [math]{\mathbf Y}{\rm =}{\mathbf U}{{\mathbf B}}_{{\rm 1}}[/math] where [math]{\mathbf U}\in {{\mathbb R}}^{P\times K}[/math] is any orthonormal basis of the [math]K[/math]-th principal subspace of [math]{{\tilde{\mathbf {W}}}}[/math] and [math]{{\mathbf B}}_{{\rm 1}}[/math] is an arbitrary orthogonal matrix in [math]{{\mathbb R}}^{K\times K}[/math].


Since [math]{{\mathbf B}}_{{\mathbf 1}}[/math] is an arbitrary orthogonal matrix, we may take it to be the identity, giving [math]{{\mathbf Y}}_{{\mathbf {opt}}}{\rm =}{\mathbf U}[/math].
The optimal solution [math]{{\mathbf Y}}_{{\mathbf {opt}}}[/math] of the relaxed problem in general does not satisfy constraint (1a); therefore we would like to find a candidate solution of the original optimization problem that is close to the optimal solution [math]{{\mathbf Y}}_{{\mathbf {opt}}}[/math]. This is called rounding.
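Theorem 2 can be verified numerically: the top-[math]K[/math] eigenvector basis of [math]\tilde{{\mathbf W}}[/math] attains the maximum. A small NumPy sketch (the function name and toy matrix are mine):

```python
import numpy as np

def relaxed_solution(W, K):
    """Return an orthonormal basis U of the K-th principal subspace of
    W~ = D^{-1/2} W D^{-1/2} and the attained value of tr(U^T W~ U)."""
    d = W.sum(axis=1)                          # diagonal of D
    Wt = W / np.sqrt(np.outer(d, d))           # D^{-1/2} W D^{-1/2}
    vals, vecs = np.linalg.eigh(Wt)            # eigenvalues in ascending order
    return vecs[:, -K:], vals[-K:].sum()       # K largest eigenpairs

W = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
U, opt = relaxed_solution(W, 2)
# tr(U^T W~ U) equals the sum of the two largest eigenvalues of W~
```

Any [math]{\mathbf U}{{\mathbf B}}_1[/math] with orthogonal [math]{{\mathbf B}}_1[/math] attains the same value, which is why taking [math]{{\mathbf B}}_1={\mathbf I}[/math] loses nothing.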

Rounding

We know that, given a partition [math]{\mathbf E}[/math], a candidate solution of the original optimization problem is [math]{{\mathbf Y}}_{{\mathbf {part}}}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf DE}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math]. Therefore we want to compare [math]{{\mathbf Y}}_{{\mathbf {opt}}}[/math] with [math]{{\mathbf Y}}_{{\mathbf {part}}}[/math]. Since both are orthogonal matrices whose column vectors span [math]K[/math]-dimensional subspaces, it makes sense to compare them through the subspaces spanned by their columns. A common way to compare subspaces is to compare the orthogonal projections onto them, that is, to compute the Frobenius norm of the difference between [math]{{\mathbf Y}}_{{\mathbf {opt}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {opt}}}[/math] and [math]{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}[/math].
The cost function measuring the difference between [math]{{\mathbf Y}}_{{\mathbf {opt}}}[/math] and [math]{{\mathbf Y}}_{{\mathbf {part}}}[/math] is defined as
[math]J_1(\mathbf{W,E})=\frac {1} {2} \|\mathbf{Y_{opt} Y^{\rm T}_{opt}-Y_{part} Y^{\rm T}_{part}} \|^2_F=\frac {1} {2} \|\mathbf{U U^{\rm T}-D^{\rm {1/2}}E(E^{\rm T}DE)^{\rm -1}E^{\rm T}D^{\rm {1/2}}}\|^2_F[/math]


We want to minimize [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math], the difference between [math]{{\mathbf Y}}_{{\mathbf {opt}}}[/math] and [math]{{\mathbf Y}}_{{\mathbf {part}}}[/math].


After expansion, we have [math]J_1\left({\mathbf W},{\mathbf E}\right)=K-tr\left({{{\mathbf E}}^{{\rm T\ }}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}\right)[/math].
Compared to the normalized cut [math]C\left({\mathbf W},{\mathbf E}\right)=K-tr\left({{\mathbf E}}^{{\rm T}}{\mathbf {WE}}{\left({{\mathbf E}}^{{\rm T}}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}\right)[/math],
We notice that [math]J_1\left({\mathbf W},{\mathbf E}\right)=C\left({\mathbf W},{\mathbf E}\right)[/math] if [math]{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}={\mathbf W}[/math], that is, [math]{\mathbf U}{{\mathbf U}}^{{\rm T}}={{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf \ }{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}=\tilde{{\mathbf W}}[/math]


In the paper, it is claimed that if [math]{\mathbf W}[/math] has rank equal to [math]K[/math], then [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] is exactly the normalized cut [math]C\left({\mathbf W},{\mathbf E}\right)[/math]. I doubt this assertion. Instead, it can be asserted that if [math]\tilde{{\mathbf W}}[/math] can be decomposed as [math]{\mathbf U}{{\mathbf U}}^{{\rm T}}[/math], where [math]{\mathbf U}\in {{\mathbb R}}^{P\times K}[/math] is any orthonormal basis of the [math]K[/math]-th principal subspace of [math]\tilde{{\mathbf W}}[/math], then [math]{\mathbf W}[/math] has rank equal to [math]K[/math] and [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] is exactly the normalized cut [math]C\left({\mathbf W},{\mathbf E}\right)[/math].


In general, [math]{\mathbf U}{{\mathbf U}}^{{\rm T}}[/math] could be a poor approximation of [math]\tilde{{\mathbf W}}[/math], and hence [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] could be a poor approximation of [math]C\left({\mathbf W},{\mathbf E}\right)[/math]; that is, the rounding cost need not be close to the normalized cut. So what justifies this rounding scheme? The optimal partition [math]\mathbf E[/math] for [math]J_1(\mathbf{W,E})[/math] is not necessarily close to the optimal solution for minimizing the normalized cut. It would be interesting to know their relationship, e.g. whether they have comparable clustering performance, but the paper does not address this.

Theorem 3

[math]J_1(\mathbf{W,E})=\sum^K_{k=1}\sum_{\mathbf {x}_j \in C_k}\|\mathbf{u}_j-d^{\rm {1/2}}_j \boldsymbol{\mu}_k\|^2[/math]
where [math]{{\mathbf u}}_j[/math] is the [math]j[/math]-th row of [math]{\mathbf U}[/math] and hence a low-dimensional representation of data point [math]{{\mathbf x}}_j[/math], [math]d_j={{\mathbf D}}_{jj}[/math], and [math]{{\boldsymbol{\mu} }}_k{\mathbf =}{\sum_{{{\mathbf x}}_j\in C_k}{d^{{1}/{2}}_j}{{\mathbf u}}_j}/{\sum_{{{\mathbf x}}_j\in C_k}{d_j}}[/math].


Compared to the objective function of K-means clustering
[math]J=\sum^K_{k=1}{\sum_{{\mathbf x}\in C_k}{{\|{\mathbf x}{\mathbf -}{{\boldsymbol{\mu} }}_k\|}^2}}[/math].
It is quite similar, except that the input is now [math]{{\mathbf u}}_j[/math], the low-dimensional representation of data point [math]{{\mathbf x}}_j[/math], and the cluster mean [math]{{\boldsymbol{\mu} }}_k[/math] is weighted.

Spectral Clustering using weighted K-means

So a spectral clustering algorithm that minimizes [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] with respect to [math]{\mathbf E}[/math] using weighted K-means is proposed. K-means finds a local minimum of [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math].
Input: Similarity matrix [math]{\mathbf W}\in {{\mathbb R}}^{P\times P}[/math]
Algorithm:
1. Compute first [math]K[/math] eigenvectors [math]{\mathbf U}\in {{\mathbb R}}^{P\times K}[/math] of [math]{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math] where [math]{\mathbf D}{\mathbf =}{\rm Diag(}{\mathbf W}\cdot {\mathbf 1}{\rm )}[/math].
2. Let [math]{{\mathbf u}}_i[/math] be the [math]i[/math]-th row of [math]{\mathbf U}[/math] and [math]d_i={\mathbf \ }{{\mathbf D}}_{ii}[/math].
3. Initialize partition [math]C[/math].
4. Weighted K-means: While partition [math]C[/math] is not stationary,
a. For [math]k=1,\dots ,K[/math], [math]{{\boldsymbol{ \mu} }}_k{\mathbf =}{\sum_{{{\mathbf x}}_j\in C_k}{d^{{1}/{2}}_j}{{\mathbf u}}_j}/{\sum_{{{\mathbf x}}_j\in C_k}{d_j}}[/math]
b. For [math]i=1,\dots ,P[/math], assign [math]{{\mathbf x}}_i[/math] to [math]C_k[/math] where [math]k={\arg {\mathop{\min }_{k'} \|{{\mathbf u}}_i{\mathbf -}d^{{1}/{2}}_i{{\boldsymbol{ \mu} }}_{k'}\|\ } }[/math]
Output: partition [math]C[/math], distortion measure [math]\sum^K_{k=1}\sum_{\mathbf {x}_j \in C_k}\|\mathbf{u}_j-d^{\rm {1/2}}_j \boldsymbol{\mu}_k\|^2[/math].
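The algorithm above can be sketched in NumPy as follows. The paper only says "initialize partition C"; for reproducibility this sketch seeds the means with a deterministic farthest-first heuristic, which is my own choice, not the paper's:

```python
import numpy as np

def spectral_weighted_kmeans(W, K, n_iter=100):
    """Spectral clustering by minimizing J_1(W, E) with weighted K-means.

    Rows u_i of the top-K eigenvector matrix U of D^{-1/2} W D^{-1/2} are
    clustered with weighted means mu_k = sum_j d_j^{1/2} u_j / sum_j d_j,
    assigning x_i by argmin_k ||u_i - d_i^{1/2} mu_k||.
    """
    d = W.sum(axis=1)                                  # diagonal of D
    Wt = W / np.sqrt(np.outer(d, d))                   # D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(Wt)
    U = vecs[:, -K:]                                   # first K eigenvectors

    # Deterministic farthest-first seeding of the means (my choice)
    centers = [0]
    for _ in range(1, K):
        dists = np.min([np.linalg.norm(U - U[c], axis=1) for c in centers], axis=0)
        centers.append(int(dists.argmax()))
    mu = U[centers] / np.sqrt(d[centers])[:, None]     # so that d_c^{1/2} mu_k = u_c

    labels = None
    for _ in range(n_iter):
        # Assignment step: argmin_k ||u_i - d_i^{1/2} mu_k||
        diff = U[:, None, :] - np.sqrt(d)[:, None, None] * mu[None, :, :]
        new_labels = np.linalg.norm(diff, axis=2).argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                                      # partition stationary
        labels = new_labels
        # Update step: weighted cluster means
        for k in range(K):
            in_k = labels == k
            if in_k.any():
                mu[k] = (np.sqrt(d[in_k])[:, None] * U[in_k]).sum(axis=0) / d[in_k].sum()
    return labels

# Two well-separated blocks; the algorithm should recover them
W = np.array([[1.0, 0.9, 0.05, 0.05],
              [0.9, 1.0, 0.05, 0.05],
              [0.05, 0.05, 1.0, 0.9],
              [0.05, 0.05, 0.9, 1.0]])
labels = spectral_weighted_kmeans(W, 2)
```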

The relationship among minimizing the normalized cut, minimizing [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math], and spectral clustering using weighted K-means is shown in the figure "Illustration of J(W,E)".

Alternative cost function [math]J_2\left({\mathbf W},{\mathbf E}\right)[/math]

Recall that the cost function [math]J_1\left({\mathbf W},{\mathbf E}\right){\mathbf =}\frac{{\rm 1}}{{\rm 2}}{\|{\mathbf U}{{\mathbf U}}^{{\rm T}}{\rm -}{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}\|}^{{\rm 2}}_F[/math]
Let [math]{\mathbf V}[/math] come from multiplying [math]{\mathbf U}[/math] by [math]{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math] and re-orthonormalizing, that is, [math]{\mathbf V}{\mathbf =}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf U}{\left({{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{\mathbf -}1}{\mathbf U}\right)}^{{{\rm -}{\rm 1}}/{{\rm 2}}}[/math], and let [math]{{\mathbf Y}{\mathbf '}}_{{\mathbf {part}}}[/math] come from multiplying [math]{{\mathbf Y}}_{{\mathbf {part}}}[/math] by [math]{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math] and re-orthonormalizing, that is, [math]{{\mathbf Y}{\mathbf '}}_{{\mathbf {part}}}{\rm =}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{{\rm -}{\rm 1}}/{{\rm 2}}}[/math], so that [math]{{\mathbf Y}{\mathbf '}}_{{\mathbf {part}}}{{\mathbf Y}{\mathbf '}}^{{\rm T}}_{{\mathbf {part}}}={\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T}}[/math]
Replacing [math]{\mathbf U}[/math] and [math]{{\mathbf Y}}_{{\mathbf {part}}}[/math] by [math]{\mathbf V}[/math] and [math]{{\mathbf Y}{\mathbf '}}_{{\mathbf {part}}}[/math] respectively in [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math], we obtain a new cost function
[math]J_2\left({\mathbf W},{\mathbf E}\right){\mathbf =}\frac{{\rm 1}}{{\rm 2}}{\|{\mathbf V}{{\mathbf V}}^{{\rm T}}{\rm -}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T}}\|}^{{\rm 2}}_F[/math]


In the paper, two versions of [math]{\mathbf V}[/math] are used, which I think is not correct. One is [math]{\mathbf V}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{\left({{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{\mathbf -}1}{\mathbf U}\right)}^{{{\rm -}{\rm 1}}/{{\rm 2}}}[/math] and the other is [math]{\mathbf V}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{\left({{\mathbf U}}^{{\rm T}}{\mathbf DU}\right)}^{{{\rm -}{\rm 1}}/{{\rm 2}}}[/math]. Moreover, [math]J_2\left({\mathbf W},{\mathbf E}\right)[/math] seems less well formulated than [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math]: I cannot figure out which relaxed problem it comes from, nor what its relationship to the normalized cut is.

Theorem 4

[math]J_2\left({\mathbf W},{\mathbf E}\right)=\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{\|{{\mathbf v}}_j{\mathbf -}{{\boldsymbol{\mu} }}_k\|}^2}}[/math]
where [math]{{\mathbf v}}_j[/math] is the [math]j[/math]-th row of [math]{\mathbf V}[/math] and hence a low-dimensional representation of data point [math]{{\mathbf x}}_j[/math], and [math]{{\boldsymbol{\mu} }}_k{\mathbf =}\frac{{\rm 1}}{\left|C_k\right|}\sum_{{{\mathbf x}}_j\in C_k}{{{\mathbf v}}_j}[/math].


It is the same as the objective function of K-means, except that the input is now [math]{{\mathbf v}}_j[/math], the low-dimensional representation of data point [math]{{\mathbf x}}_j[/math].

Spectral clustering using K-means algorithm

Input: Similarity matrix [math]{\mathbf W}\in {{\mathbb R}}^{P\times P}[/math].
Algorithm:
1. Compute first [math]K[/math] eigenvectors [math]{\mathbf U}\in {{\mathbb R}}^{P\times K}[/math] of [math]{{\mathbf D}}^{{\rm -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\rm -}{{\rm 1}}/{{\rm 2}}}[/math] where [math]{\mathbf D}{\rm =Diag(}{\mathbf W}\cdot {\mathbf 1}{\rm )}[/math].
2. Let [math]{\mathbf V}{\rm =}{{\mathbf D}}^{{{\rm -1}}/{{\rm 2}}}{\mathbf U}{\left({{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{\rm -}{\rm 1}}{\mathbf U}\right)}^{{{\rm -}{\rm 1}}/{{\rm 2}}}[/math]
3. Let [math]{{\mathbf v}}_i[/math] be the [math]i[/math]-th row of [math]{\mathbf V}[/math].
4. Initialize partition [math]C[/math].
5. K-means: While partition [math]C[/math] is not stationary,
a. For [math]{\rm k=1,}\dots {\rm ,}K[/math], [math]{{\boldsymbol{\mu} }}_k{\mathbf =}\frac{{\rm 1}}{\left|C_k\right|}\sum_{{{\mathbf x}}_j\in C_k}{{{\mathbf v}}_j}[/math]
b. For [math]i=1,\dots ,P[/math], assign [math]{{\mathbf x}}_i[/math] to [math]C_k[/math] where [math]k={\arg {\mathop{\min }_{k'} \|{{\mathbf v}}_i{\rm -}{{\boldsymbol{\mu} }}_{k'}\|\ }}[/math]
Output: partition [math]C[/math], distortion measure [math]\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{\|{{\mathbf v}}_j{\mathbf -}{{\boldsymbol{\mu} }}_k\|}^2}}[/math].

Learning the Similarity Matrix

In clustering, the similarity matrix [math]{\mathbf W}[/math] is given, and the goal is to find a partition [math]{\mathbf E}[/math] minimizing the objective function. In this section, the partition [math]{\mathbf E}[/math] is assumed to be given and our goal is to learn the similarity matrix [math]{\mathbf W}[/math].

Trivial solution

If no constraint is put on [math]{\mathbf W}[/math], given a dataset with partition [math]{\mathbf E}[/math], the trivial solution would be any matrix that is block-constant with the appropriate blocks, that is, a matrix that makes the similarity between points in the same cluster large and the similarity between points in different clusters zero.

The meaningful formulation

We have a parametric form for [math]{\mathbf W}[/math], a function of a parameter vector [math]{\boldsymbol{\alpha} }\in {{\mathbb R}}^F[/math], denoted [math]{\mathbf W}\left({\boldsymbol{\alpha} }\right)[/math]. Given datasets with known partitions [math]{\mathbf E}[/math], we would like to learn [math]{\boldsymbol{\alpha} }[/math] such that the distance between the true partition and the partition obtained from the clustering algorithm using [math]{\mathbf W}\left({\boldsymbol{\alpha} }\right)[/math] is minimized, and hopefully this generalizes to unseen datasets with similar structure. In the paper, diagonally-scaled Gaussian kernel matrices are considered.
[math]\mathbf W(\boldsymbol{\alpha})(i,j)= \rm exp (-(\mathbf{x}_i-\mathbf{x}_j)^{\rm T}Diag(\boldsymbol{\alpha})(\mathbf{x}_i-\mathbf{x}_j) ) [/math]
Moreover, the elements of [math]{\boldsymbol{\alpha} }[/math] are in one-to-one correspondence with the features; setting one of these parameters to zero is equivalent to ignoring the corresponding feature. Labeled datasets are available for speech separation and image segmentation.

Relationship with distance metric learning

This formulation is similar to the distance metric learning we learned in class, although it does not explicitly aim at a metric under which similar points are close and dissimilar points are far apart. It would be interesting to know the performance if we used distance metric learning to learn [math]{\boldsymbol{\alpha} }[/math], applied it to obtain the similarity matrix for unseen data, and then clustered; would it overfit the data?

Distance between partitions

Let [math]{\mathbf E}{\mathbf =}({{\mathbf e}}_1,\dots ,{{\mathbf e}}_R)[/math] and [math]{\mathbf F}{\mathbf =}({{\mathbf f}}_1,\dots ,{{\mathbf f}}_S)[/math] be two partitions of [math]P[/math] data points, the distance between these partitions is defined as
[math]D\left({\mathbf E},{\mathbf F}\right)=\frac{1}{\sqrt{2}}{\|{{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T\ }}-{{\mathbf F}\left({{\mathbf F}}^{{\rm T\ }}{\mathbf F}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf F}}^{{\rm T\ }}\|}_F[/math]
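The distance [math]D\left({\mathbf E},{\mathbf F}\right)[/math] compares the orthogonal projections onto the column spaces of the two indicator matrices, so it is invariant to relabeling the clusters. A short NumPy sketch (helper names are mine):

```python
import numpy as np

def partition_distance(E, F):
    """D(E,F) = (1/sqrt(2)) ||E(E^T E)^{-1}E^T - F(F^T F)^{-1}F^T||_F."""
    PE = E @ np.linalg.inv(E.T @ E) @ E.T    # projection onto span of E
    PF = F @ np.linalg.inv(F.T @ F) @ F.T    # projection onto span of F
    return np.linalg.norm(PE - PF, 'fro') / np.sqrt(2)

def indicator(labels, K):
    """Build the P x K cluster indicator matrix from a label list."""
    E = np.zeros((len(labels), K))
    E[np.arange(len(labels)), labels] = 1.0
    return E

E = indicator([0, 0, 1, 1], 2)
F = indicator([1, 1, 0, 0], 2)   # same partition, clusters relabeled
G = indicator([0, 1, 0, 1], 2)   # a genuinely different partition
```

Here [math]D({\mathbf E},{\mathbf F})=0[/math] despite the relabeling, while [math]D({\mathbf E},{\mathbf G})\gt 0[/math].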

Naive approach

The naive approach is to minimize the distance between the true partition [math]{\mathbf E}[/math] and the partition [math]{\mathbf E}\left({\mathbf W}\right)[/math] obtained from spectral clustering. However, this is hard to optimize, since the K-means algorithm is a non-continuous map and makes the cost function non-continuous.


Theorem 5

Let [math]\eta ={{\mathop{\max }_{{{\mathbf x}}_i} {{\mathbf D}}_{ii}\ }}/{{\mathop{\min }_{{{\mathbf x}}_i} {{\mathbf D}}_{ii}\ }}\ge 1[/math], [math]{{\mathbf E}}_{{\mathbf 1}}\left({\mathbf W}\right){\mathbf =}{\arg {\mathop{\min }_{{\mathbf E}} J_1\left({\mathbf W},{\mathbf E}\right)\ }\ }[/math] and [math]{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right){\mathbf =}{\arg {\mathop{\min }_{{\mathbf E}} J_2\left({\mathbf W},{\mathbf E}\right)\ }\ }[/math], we have


[math]{D\left({\mathbf E},{{\mathbf E}}_{{\mathbf 1}}\left({\mathbf W}\right)\right)}^2\le 4\eta J_1\left({\mathbf W},{\mathbf E}\right)[/math]


and


[math]{D\left({\mathbf E},{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\right)}^2\le 4J_2\left({\mathbf W},{\mathbf E}\right)[/math]


This theorem shows that instead of minimizing [math]D\left({\mathbf E},{\mathbf E}\left({\mathbf W}\right)\right)[/math], we can minimize its upper bound [math]J\left({\mathbf W},{\mathbf E}\right)[/math]: if we can obtain a [math]{\mathbf W}[/math] that makes [math]J\left({\mathbf W},{\mathbf E}\right)[/math] small, then [math]{\mathbf E}\left({\mathbf W}\right)[/math] will be close to [math]{\mathbf E}[/math].

Approximation of eigensubspace

Given a matrix [math]{\mathbf M}\in {{\mathbb R}}^{P\times P}[/math], let [math]{\mathbf U}\in {{\mathbb R}}^{P\times K}[/math] denote any orthonormal basis of the [math]K[/math]-th principal subspace of [math]{\mathbf M}[/math]. This eigensubspace can be approximated by the column space of [math]{{\mathbf M}}^q{\mathbf V}[/math], where [math]{\mathbf V}\in {{\mathbb R}}^{P\times K}[/math] and [math]q[/math] is the number of power iterations. [math]{\mathbf U}[/math] is then approximated by orthonormalizing: [math]{{\mathbf M}}^q{\mathbf V}{\left({\left({{\mathbf M}}^q{\mathbf V}\right)}^{{\rm T}}{{\mathbf M}}^q{\mathbf V}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math]. The corresponding approximation of the orthogonal projection [math]{\mathbf U}{{\mathbf U}}^{{\rm T}}[/math] is [math]{{\mathbf M}}^q{\mathbf V}{\left({\left({{\mathbf M}}^q{\mathbf V}\right)}^{{\rm T}}{{\mathbf M}}^q{\mathbf V}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf V}}^{{\rm T}}{{\mathbf M}}^q[/math]
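The power-iteration approximation can be illustrated numerically. Orthonormalizing [math]{{\mathbf M}}^q{\mathbf V}[/math] via QR yields the same column space as the matrix-square-root expression above; the test matrix with a known, well-separated spectrum is my own choice:

```python
import numpy as np

rng = np.random.default_rng(0)
P, K, q = 6, 2, 50
Q, _ = np.linalg.qr(rng.standard_normal((P, P)))
M = Q @ np.diag([10.0, 8.0, 1.0, 0.5, 0.2, 0.1]) @ Q.T   # symmetric, known spectrum

V = rng.standard_normal((P, K))               # random start block
MqV = np.linalg.matrix_power(M, q) @ V        # M^q V
# (M^q V)((M^q V)^T M^q V)^{-1/2} orthonormalizes the columns; QR spans the
# same column space and is numerically more stable
U_approx, _ = np.linalg.qr(MqV)

vals, vecs = np.linalg.eigh(M)
U_exact = vecs[:, -K:]                        # exact K-th principal subspace

# Compare the two orthogonal projections; the error decays like (lam_3/lam_2)^q
err = np.linalg.norm(U_approx @ U_approx.T - U_exact @ U_exact.T)
```

With the eigenvalue gap above (1 versus 8), fifty power iterations already drive the projection error below machine-precision scale.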

Derivative of orthogonal projection [math]{\mathbf \Pi }={\mathbf U}{{\mathbf U}}^{{\rm T}}[/math]

[math]d{\mathbf \Pi }{\mathbf =}{\mathbf U}{{\mathbf N}}^{{\rm T}}{\rm +}{\mathbf N}{{\mathbf U}}^{{\rm T}}[/math] can be obtained by solving the following linear system:
[math]{\mathbf {MN}}{\mathbf -}{\mathbf N}{{\mathbf U}}^{{\rm T}}{\mathbf {MU}}{\mathbf =-}\left({\mathbf I}{\mathbf -}{\mathbf U}{{\mathbf U}}^{{\rm T}}\right)d{\mathbf {MU}}[/math] and [math]{{\mathbf U}}^{{\rm T}}{\mathbf N}{\mathbf =}{\mathbf 0}[/math]

Approximation of the cost function [math]J\left({\mathbf W},{\mathbf E}\right)[/math]

Let [math]{\mathbf F}[/math] be a random partition of the dataset, [math]{\mathbf V}\in {{\mathbb R}}^{{ P}\times { K}}[/math] be [math]{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf F}[/math], the approximated orthonormal basis of the [math]{K}[/math]-th principal subspace of [math]\tilde{{\mathbf W}}[/math] is [math]{{\mathbf U}{\mathbf '=}\tilde{{\mathbf W}}}^q{\mathbf V}{\left({\left({\tilde{{\mathbf W}}}^q{\mathbf V}\right)}^{{\rm T}}{\tilde{{\mathbf W}}}^q{\mathbf V}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math]


The approximated function for [math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] is
[math]F_1\left({\mathbf W},{\mathbf E}\right){\mathbf =}\frac{{\rm 1}}{{\rm 2}}{\|{\mathbf U}{\mathbf '}{{\mathbf U}{\mathbf '}}^{{\rm T}}{\rm -}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}\|}^{{\rm 2}}_F[/math]
and the approximated function for [math]J_2\left({\mathbf W},{\mathbf E}\right)[/math] is
[math]F_2\left({\mathbf W},{\mathbf E}\right)=\frac{{\rm 1}}{{\rm 2}}{\|{\mathbf V}{\mathbf '}{{\mathbf V}{\mathbf '}}^{{\rm T}}{\rm -}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T}}\|}^{{\rm 2}}_F[/math]
where [math]{{\mathbf V}}^{{\mathbf '}}{\mathbf =}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf U}{\mathbf '}{\left({{\mathbf U}{\mathbf '}}^{{\rm T}}{{\mathbf D}}^{{\mathbf -}{\mathbf 1}}{\mathbf U}{\mathbf '}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}[/math]

Learning Algorithm

Given [math]N[/math] datasets [math]X_n[/math], [math]n=1,2,\dots ,N[/math], with known partitions [math]{{\mathbf E}}_n[/math], the similarity matrix for each dataset is denoted [math]{{\mathbf W}}_n\left({\boldsymbol{\alpha} }\right)[/math]. The cost function used for learning [math]{\boldsymbol{\alpha} }[/math] is
[math]H\left({\boldsymbol{\alpha} }\right)=\frac{1}{N}\sum^N_{n=1}{F\left({{\mathbf W}}_n\left({\boldsymbol{\alpha} }\right),{{\mathbf E}}_n\right)}+\gamma {\|{\boldsymbol{\alpha}}\|}_1[/math]
where [math]\gamma [/math] is a regularization constant, and the [math]L_1[/math] norm of [math]{\boldsymbol{\alpha} }[/math] serves as a feature selection term, tending to make the solution sparse. The learning algorithm minimizes [math]H\left({\boldsymbol{\alpha} }\right)[/math] with respect to [math]{\boldsymbol{\alpha} }[/math] by gradient descent.
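A deliberately simplified sketch of the learning loop: a single training set, the exact cost [math]J_1\left({\mathbf W}\left({\boldsymbol{\alpha}}\right),{\mathbf E}\right)[/math] in place of the power-iteration approximation [math]F[/math], and finite-difference gradients with backtracking instead of the paper's analytic derivative of the eigensubspace. All names and the toy data are my own:

```python
import numpy as np

def gaussian_w(X, alpha):
    # W(alpha)(i,j) = exp(-(x_i - x_j)^T Diag(alpha) (x_i - x_j)), X is D x P
    Xs = X * np.sqrt(alpha)[:, None]
    sq = (Xs ** 2).sum(axis=0)
    return np.exp(-np.maximum(sq[:, None] + sq[None, :] - 2 * Xs.T @ Xs, 0.0))

def j1(W, E):
    # Exact cost J_1(W, E), used here instead of the approximation F
    K = E.shape[1]
    d = W.sum(axis=1)
    _, vecs = np.linalg.eigh(W / np.sqrt(np.outer(d, d)))
    U = vecs[:, -K:]
    DE = np.sqrt(d)[:, None] * E                       # D^{1/2} E
    Ppart = DE @ np.linalg.inv(E.T @ (d[:, None] * E)) @ DE.T
    return 0.5 * np.linalg.norm(U @ U.T - Ppart, 'fro') ** 2

def H(alpha, X, E, gamma=0.01):
    # H(alpha) = J_1(W(alpha), E) + gamma ||alpha||_1
    return j1(gaussian_w(X, alpha), E) + gamma * np.abs(alpha).sum()

# Toy training set: clusters separated along feature 1, feature 2 pure noise
rng = np.random.default_rng(0)
X = np.vstack([np.r_[np.zeros(5), 3.0 * np.ones(5)] + 0.1 * rng.standard_normal(10),
               rng.standard_normal(10)])
E = np.zeros((10, 2)); E[:5, 0] = 1.0; E[5:, 1] = 1.0

alpha0 = np.ones(2)
alpha, eps = alpha0.copy(), 1e-5
for _ in range(100):                                   # projected descent, alpha > 0
    g = np.array([(H(alpha + eps * np.eye(2)[i], X, E)
                   - H(alpha - eps * np.eye(2)[i], X, E)) / (2 * eps) for i in range(2)])
    step = 0.5
    while step > 1e-8:                                 # backtracking line search
        cand = np.maximum(alpha - step * g, 1e-6)
        if H(cand, X, E) < H(alpha, X, E):
            alpha = cand
            break
        step *= 0.5
```

The [math]L_1[/math] term pushes the parameter of the uninformative noise feature toward zero, which is exactly the feature-selection effect described above.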

Experiment

After learning [math]{\boldsymbol{\alpha} }[/math] from the training dataset, the spectral clustering algorithm is applied to the test dataset with the similarity matrix generated by [math]{\boldsymbol{\alpha} }[/math]. The figures are from the paper; the authors appear to have an error in the description: the top row and the bottom row of Figure 8 should be switched.

Segmentation results after learning the parameterized similarity matrices

The speech signal is a mixture of two English speakers. After learning the parameters, spectral clustering is able to separate the mixed signals.

Speech Segmentation results after learning

Appendix

Proof for Theorem 1

Substituting [math]{\mathbf Y}[/math] in (1b) using (1a), we have
[math]{{{\mathbf \Lambda }}^{{\rm T}}{{\mathbf E}}^{{\rm T\ }}{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\mathbf \Lambda }{\mathbf =}{\mathbf I}[/math]
so [math]{{\mathbf \Lambda }}^{{\rm T}}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{{\rm 1}}/{{\rm 2}}}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{{\rm 1}}/{{\rm 2}}}{\mathbf \Lambda }{\mathbf =}{\mathbf I}[/math]. Since [math]{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{{\rm 1}}/{{\rm 2}}}{\mathbf \Lambda }[/math] is therefore a square orthogonal matrix, we also have
[math]{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{{\rm 1}}/{{\rm 2}}}{\mathbf \Lambda }{{\mathbf \Lambda }}^{{\rm T}}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{{\rm 1}}/{{\rm 2}}}{\mathbf =}{\mathbf I}[/math]
So we finally have [math]{\mathbf \Lambda }{{\mathbf \Lambda }}^{{\rm T}}{\mathbf =}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}[/math]
For every [math]{\mathbf Y}[/math] satisfying the constraints, we can find corresponding [math]{\mathbf E}[/math] and [math]{\mathbf \Lambda }[/math] such that
[math]K-tr\left({{\mathbf Y}}^{{\rm T}}\left({{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}\right){\mathbf Y}\right)[/math]
[math]=K-tr\left({{{{{{\mathbf \Lambda }}^{{\rm T}}{\mathbf E}}^{{\rm T\ }}{\mathbf D}^{{{\rm 1}}/{{\rm 2}}}}{\mathbf D}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf D}^{\rm{1/2}}}{\mathbf E}{\mathbf \Lambda }\right)[/math]
[math]=K-tr\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {WE}}{\mathbf \Lambda }{{\mathbf \Lambda }}^{{\rm T}}\right)[/math]
[math]=K-tr\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {WE}}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}\right)[/math]
[math]=C\left({\mathbf W},{\mathbf E}\right)[/math]
[math]=Ncut\left(C\right)[/math]

[math]J_1\left({\mathbf W},{\mathbf E}\right)[/math] Expansion


[math]J_1\left({\mathbf W},{\mathbf E}\right){\mathbf =}\frac{{\rm 1}}{{\rm 2}}{\|{{\mathbf Y}}_{{\mathbf {opt}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {opt}}}{\rm -}{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}\|}^{{\rm 2}}_F[/math]
[math]=\frac{{\rm 1}}{{\rm 2}}tr\left({\mathbf U}{{\mathbf U}}^{{\rm T}}{\rm +}{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}{\rm -}{\rm 2}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}\right)[/math]
[math]=\frac{{\rm 1}}{{\rm 2}}K+\frac{{\rm 1}}{{\rm 2}}K-tr\left({\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf Y}}_{{\mathbf {part}}}{{\mathbf Y}}^{{\rm T}}_{{\mathbf {part}}}\right)[/math]
[math]=K-tr\left({{{\mathbf E}}^{{\rm T\ }}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}\right)[/math]

Proof for Theorem 3

[math]\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{\|{{\mathbf u}}_j{\mathbf -}{d^{{1}/{2}}_j{\boldsymbol{\mu} }}_k\|}^2}}[/math]
[math]=\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{\left({{\mathbf u}}_j{\mathbf -}{d^{{1}/{2}}_j{\boldsymbol{\mu} }}_k\right)}^{{\rm T}}\left({{\mathbf u}}_j{\mathbf -}{d^{{1}/{2}}_j{\boldsymbol{\mu} }}_k\right)}}[/math]
[math]=\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{{\mathbf u}}^{{\rm T}}_j{{\mathbf u}}_j}}-2\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{d^{{1}/{2}}_j{\mathbf u}}^{{\rm T}}_j}}{{\boldsymbol{\mu} }}_k{\mathbf +}\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{d_j{{\boldsymbol{\mu} }}^{{\rm T}}_k{{\boldsymbol{\mu} }}_k}}[/math]
[math]=\sum^P_{j=1}{{{\mathbf u}}^{{\rm T}}_j{{\mathbf u}}_j}-\sum^K_{k=1}{\sum_{{{\mathbf x}}_j\in C_k}{{d^{{1}/{2}}_j{\mathbf u}}^{{\rm T}}_j}}\left(\sum_{{{\mathbf x}}_i\in C_k}{d^{{1}/{2}}_i}{{\mathbf u}}_i\right)\frac{1}{\sum_{{{\mathbf x}}_j\in C_k}{d_j}}[/math]
[math]=\sum^P_{j=1}{\sum^K_{i=1}{{{ u}}_{ji}{{ u}}_{ji}}}-\sum^K_{k=1}{\frac{1}{\sum_{{{\mathbf x}}_j\in C_k}{d_j}}\sum_{{{\mathbf x}}_i,{{\mathbf x}}_j\in C_k}{{d^{{1}/{2}}_id^{{1}/{2}}_j{\mathbf u}}^{{\rm T}}_i}{{\mathbf u}}_j}[/math]
[math]=K-\sum^K_{k=1}{\frac{1}{{{\mathbf e}}^{{\rm T}}_k{\mathbf D}{{\mathbf e}}_k}{{{\mathbf e}}^{{\rm T}}_k{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{{\mathbf e}}_k}[/math]
[math]=K-tr\left({{{\mathbf E}}^{{\rm T\ }}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf U}{{\mathbf U}}^{{\rm T}}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {DE}}\right)}^{{\mathbf -}{\rm 1}}\right)[/math]
[math]=J_1\left({\mathbf W},{\mathbf E}\right)[/math]

Proof for Theorem 5

Remember [math]J_2\left({\mathbf W},{\mathbf E}\right){\mathbf =}\frac{{\rm 1}}{{\rm 2}}{\|{\mathbf V}{{\mathbf V}}^{{\rm T}}{\rm -}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T}}\|}^{{\rm 2}}_F[/math]

Using the triangle inequality,

[math]D\left({\mathbf E},{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\right)=\frac{1}{\sqrt{2}}{\|{{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T\ }}-{{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\left({{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)}^{{\rm T\ }}{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\right)}^{{\mathbf -}{\rm 1}}{{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)}^{{\rm T\ }}\|}_F[/math]

[math]\le \frac{1}{\sqrt{2}}{\|{{\mathbf E}\left({{\mathbf E}}^{{\rm T\ }}{\mathbf E}\right)}^{{\mathbf -}{\rm 1}}{{\mathbf E}}^{{\rm T\ }}-{\mathbf V}{{\mathbf V}}^{{\rm T}}\|}_F+\frac{1}{\sqrt{2}}{\|{\mathbf V}{{\mathbf V}}^{{\rm T}}{\rm -}{{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\left({{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)}^{{\rm T\ }}{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\right)}^{{\mathbf -}{\rm 1}}{{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)}^{{\rm T\ }}\|}_F[/math]

[math]={J_2\left({\mathbf W},{\mathbf E}\right)}^{{1}/{2}}+{\mathop{\min }_{{\mathbf E}{\mathbf '}} {J_2\left({\mathbf W},{\mathbf E}{\mathbf '}\right)}^{{1}/{2}}\ }[/math]

[math]\le 2{J_2\left({\mathbf W},{\mathbf E}\right)}^{{1}/{2}}[/math]

So, [math]{D\left({\mathbf E},{{\mathbf E}}_{{\mathbf 2}}\left({\mathbf W}\right)\right)}^2\le 4J_2\left({\mathbf W},{\mathbf E}\right)[/math]

Similarly, we can have [math]{D\left({\mathbf E},{{\mathbf E}}_{{\mathbf 1}}\left({\mathbf W}\right)\right)}^2\le 4\eta J_1\left({\mathbf W},{\mathbf E}\right)[/math]

Reference

<references/>