Learning Spectral Clustering, With Application To Speech Separation


Introduction

The paper <ref>Francis R. Bach and Michael I. Jordan, Learning Spectral Clustering, With Application To Speech Separation, Journal of Machine Learning Research 7 (2006) 1963-2001.</ref> presented here is about spectral clustering, which makes use of dimensionality reduction, and about learning a similarity matrix that generalizes to unseen datasets when spectral clustering is applied to them.

Clustering

Clustering refers to partitioning a given dataset into clusters such that data points in the same cluster are similar and data points in different clusters are dissimilar. Similarity is usually measured in terms of the distance between data points.

Formally stated, given a set of data points [math]\displaystyle{ X=\{{{\mathbf x}}_1,{{\mathbf x}}_2,\dots ,{{\mathbf x}}_P\} }[/math], we would like to find [math]\displaystyle{ K }[/math] disjoint clusters [math]\displaystyle{ {C{\mathbf =}\{C_k\}}_{k\in \{1,\dots ,K\}} }[/math] such that [math]\displaystyle{ \bigcup{C_k}=X }[/math] and that optimize a certain objective function. The dimensionality of the data points is [math]\displaystyle{ D }[/math], and [math]\displaystyle{ X }[/math] can be represented as a matrix [math]\displaystyle{ {{\mathbf X}}_{D\times P} }[/math]. The similarity matrix that measures the similarity between each pair of points is denoted by [math]\displaystyle{ {{\mathbf W}}_{P\times P} }[/math]. A classical similarity matrix for clustering is the diagonally-scaled Gaussian similarity, defined as
[math]\displaystyle{ \mathbf W(i,j)= \rm exp (-(\mathbf{x}_i-\mathbf{x}_j)^{\rm T}Diag(\boldsymbol{\alpha})(\mathbf{x}_i-\mathbf{x}_j) ) }[/math]
where [math]\displaystyle{ {\mathbf \boldsymbol{\alpha} }\in {{\mathbb R}}^D }[/math] is a vector of positive parameters, and [math]\displaystyle{ \rm Diag(\boldsymbol{\alpha} ) }[/math] denotes the [math]\displaystyle{ D\times D }[/math] diagonal matrix with diagonal [math]\displaystyle{ {\boldsymbol{\alpha} } }[/math].
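
As an illustration, here is a minimal NumPy sketch (not from the paper; the function name gaussian_similarity is ours) of how this similarity matrix could be computed from the data matrix X and the parameter vector α:

import numpy as np

def gaussian_similarity(X, alpha):
    # Diagonally-scaled Gaussian similarity:
    # W(i, j) = exp(-(x_i - x_j)^T Diag(alpha) (x_i - x_j))
    # X: (D, P) data matrix with one point per column; alpha: (D,) positive scales
    D, P = X.shape
    W = np.empty((P, P))
    for i in range(P):
        for j in range(P):
            diff = X[:, i] - X[:, j]
            W[i, j] = np.exp(-diff @ (alpha * diff))
    return W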

Objective functions

Objective function for K-means clustering

Given the number of clusters [math]\displaystyle{ K }[/math], K-means aims to minimize the following objective function (the sum of within-cluster squared distances) over all clustering schemes [math]\displaystyle{ C }[/math]:
[math]\displaystyle{ \mathop{\min_C} J=\sum^K_{k=1}\sum_{\mathbf x \in C_k}\|\mathbf x - \boldsymbol{\mu}_k\|^2 }[/math]
where [math]\displaystyle{ {\boldsymbol{\mu}}_k{\mathbf =}\frac{{\mathbf 1}}{\left|C_k\right|}\sum_{{\mathbf x}\in C_k}{{\mathbf x}} }[/math] is the mean of cluster [math]\displaystyle{ C_k }[/math].
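
As a sketch (assuming the clustering is given as a list of index sets; the helper name kmeans_objective is ours), the objective J can be evaluated as follows:

import numpy as np

def kmeans_objective(X, clusters):
    # J = sum over clusters of the squared distances of points to their cluster mean
    # X: (D, P) data matrix; clusters: list of K index arrays, one per cluster C_k
    J = 0.0
    for idx in clusters:
        mu_k = X[:, idx].mean(axis=1, keepdims=True)  # cluster mean mu_k
        J += np.sum((X[:, idx] - mu_k) ** 2)          # within-cluster squared distances
    return J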

Min cut

For two subsets [math]\displaystyle{ A,B\subset X }[/math], we define
[math]\displaystyle{ cut(A,B)=\sum_{{\mathbf x}_i \in A}\sum_{{\mathbf x}_j \in B}\mathbf W (i,j) }[/math]
Mincut is the sum of inter-cluster weights.
[math]\displaystyle{ Mincut(C)=\sum^K_{k=1} cut(C_k,X \backslash C_k) }[/math]
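
A small sketch of these two quantities, assuming the clusters are given as index arrays over the rows/columns of W (the function names are ours):

import numpy as np

def cut(W, A, B):
    # cut(A, B) = sum of W(i, j) over i in A and j in B (A, B are index arrays)
    return W[np.ix_(A, B)].sum()

def mincut(W, clusters):
    # Mincut(C) = sum over k of cut(C_k, X \ C_k)
    P = W.shape[0]
    return sum(cut(W, idx, np.setdiff1d(np.arange(P), idx)) for idx in clusters)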

Normalized cut

The normalized cut in the paper is defined as
[math]\displaystyle{ Ncut(C)=\sum^K_{k=1}\frac{cut(C_k,X\backslash C_k)}{cut(C_k,X)}=\sum^K_{k=1}\frac{cut(C_k,X)-cut(C_k,C_k)}{cut(C_k,X)}=K-\sum^K_{k=1}{\frac{cut(C_k,C_k)}{cut(C_k,X)}} }[/math]
Normalized cut takes a small value if the clusters [math]\displaystyle{ C_k }[/math] are not too small <ref> Ulrike von Luxburg, A Tutorial on Spectral Clustering, Technical Report No. TR-149, Max Planck Institute for Biological Cybernetics.</ref>, as measured by the intra-cluster weights, so it tries to achieve balanced clusters. It is therefore unlikely that we will obtain clusters containing a single data point.

The matrix representation of Normalized cut

Let [math]\displaystyle{ {{\mathbf e}}_k\in {\{0,1\}}^P }[/math] be the indicator vector for cluster [math]\displaystyle{ C_k }[/math], whose non-zero elements indicate the data points in cluster [math]\displaystyle{ C_k }[/math]. Therefore, knowing [math]\displaystyle{ {\mathbf E}{\mathbf =}({{\mathbf e}}_1,\dots ,{{\mathbf e}}_K) }[/math] is equivalent to knowing the clustering scheme [math]\displaystyle{ C }[/math]. Further, let [math]\displaystyle{ {\mathbf D} }[/math] denote the diagonal matrix whose [math]\displaystyle{ i }[/math]-th diagonal element is the sum of the elements in the [math]\displaystyle{ i }[/math]-th row of [math]\displaystyle{ {\mathbf W} }[/math], that is, [math]\displaystyle{ {\mathbf D}{\mathbf =}{\rm Diag(}{\mathbf W}\cdot {\mathbf 1}{\rm )} }[/math], where [math]\displaystyle{ {\mathbf 1} }[/math] is the vector of all ones in [math]\displaystyle{ {\{1\}}^P }[/math].
So the normalized cut can be written as
[math]\displaystyle{ Ncut(C)=C(\mathbf{W,E})=\sum^K_{k=1}\frac{{\mathbf e}^{\rm T}_k (\mathbf{D-W}){\mathbf e}_k}{{\mathbf e}^{\rm T}_k (\mathbf{D}){\mathbf e}_k}=K-tr(\mathbf {E^{\rm T} W E}(\mathbf {E^{\rm T} D E})^{-1}) }[/math]
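
For concreteness, a sketch of this matrix form of the normalized cut (the function name ncut is ours; E is the 0/1 indicator matrix defined above):

import numpy as np

def ncut(W, E):
    # Ncut(C) = K - tr(E^T W E (E^T D E)^{-1}), with D = Diag(W 1)
    K = E.shape[1]
    D = np.diag(W.sum(axis=1))
    EWE = E.T @ W @ E
    EDE = E.T @ D @ E            # diagonal if the clusters are disjoint and non-empty
    return K - np.trace(EWE @ np.linalg.inv(EDE))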

Spectral Clustering

Solving the normalized cut problem is NP-hard, so we turn to a relaxed version of it.

Theorem 1

Minimizing the normalized cut over all [math]\displaystyle{ C }[/math] is equivalent to the following optimization problem (referred to as the original optimization problem).
[math]\displaystyle{ \mathop{\min_{\mathbf Y}}K-tr(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y}) }[/math]
subject to
[math]\displaystyle{ {\mathbf Y}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\mathbf \Lambda } }[/math] (1a)
and
[math]\displaystyle{ {{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I} }[/math] (1b)
Where [math]\displaystyle{ {\mathbf \Lambda }\in {{\mathbb R}}^{K\times K},{\mathbf Y}\in {{\mathbb R}}^{P\times K} }[/math]
In other words, given [math]\displaystyle{ {\mathbf E} }[/math] and letting [math]\displaystyle{ \mathbf{\Lambda =(E^{\rm T} D E)^{\rm{-1/2}}} }[/math], we can form a candidate solution [math]\displaystyle{ {\mathbf Y}{\mathbf =}{{\mathbf D}}^{{{\rm 1}}/{{\rm 2}}}{\mathbf E}{\left({{\mathbf E}}^{{\rm T\ }}{\mathbf {D E}}\right)}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}} }[/math] for the above optimization problem, which satisfies both constraints (1a) and (1b).
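
A sketch of forming this candidate solution (the helper name candidate_Y is ours); since the clusters are disjoint, E^T D E is diagonal and its inverse square root can be taken element-wise:

import numpy as np

def candidate_Y(W, E):
    # Y = D^{1/2} E (E^T D E)^{-1/2}; by construction Y^T Y = I
    d = W.sum(axis=1)                             # diagonal of D
    ede = np.diag(E.T @ (d[:, None] * E))         # diagonal of E^T D E
    Lam = np.diag(1.0 / np.sqrt(ede))             # Lambda = (E^T D E)^{-1/2}
    return np.sqrt(d)[:, None] * (E @ Lam)        # D^{1/2} E Lambda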

Relaxed optimization problem

Since minimizing the normalized cut is an NP-hard problem, its equivalent optimization problem is NP-hard too. However, by removing the constraint (1a) in Theorem 1, a relaxed problem is obtained.


[math]\displaystyle{ \mathop{\min_{\mathbf Y}}K-tr(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y}) }[/math]
subject to
[math]\displaystyle{ {{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I} }[/math]
Where [math]\displaystyle{ {\mathbf Y}\in {{\mathbb R}}^{P\times K} }[/math]
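
By Theorem 2 below, this relaxed problem is solved by the top eigenvectors of the normalized similarity matrix; a minimal sketch (the function name relaxed_solution is ours):

import numpy as np

def relaxed_solution(W, K):
    # Maximize tr(Y^T W_tilde Y) subject to Y^T Y = I, where W_tilde = D^{-1/2} W D^{-1/2};
    # the maximum is attained by any orthonormal basis of the span of the K largest eigenvectors.
    d = W.sum(axis=1)
    D_inv_half = np.diag(1.0 / np.sqrt(d))
    W_tilde = D_inv_half @ W @ D_inv_half
    eigvals, eigvecs = np.linalg.eigh(W_tilde)    # eigenvalues in ascending order
    return eigvecs[:, -K:]                        # orthonormal basis U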

Theorem 2

The maximum of [math]\displaystyle{ tr\left({{\mathbf Y}}^{{\rm T}}\left({{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}\right){\mathbf Y}\right) }[/math] over matrices [math]\displaystyle{ {\mathbf Y}\in {{\mathbb R}}^{P\times K} }[/math] such that [math]\displaystyle{ {{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I} }[/math] is the sum of the [math]\displaystyle{ K }[/math] largest eigenvalues of [math]\displaystyle{ \tilde{{\mathbf W}}={{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}}{\mathbf W}{{\mathbf D}}^{{\mathbf -}{{\rm 1}}/{{\rm 2}}} }[/math]. It is attained at all [math]\displaystyle{ {\mathbf Y} }[/math] of the form [math]\displaystyle{ {\mathbf Y}{\rm =}{\mathbf U}{{\mathbf B}}_{{\rm 1}} }[/math] where [math]\displaystyle{ {\mathbf U}\in {{\mathbb R}}^{P\times K} }[/math] is any orthonormal basis of the [math]\displaystyle{ K }[/math]-th principal subspace of [math]\displaystyle{ {{\tilde{\mathbf {W}}}} }[/math] and [math]\displaystyle{ {{\mathbf B}}_{{\rm 1}} }[/math] is an arbitrary orthogonal matrix in [math]\displaystyle{ {{\mathbb R}}^{K\times K} }[/math].


Since [math]\displaystyle{ {{\mathbf B}}_{{\mathbf 1}} }[/math] is an arbitrary orthogonal matrix, we may take it to be the identity matrix, so we have [math]\displaystyle{ {{\mathbf Y}}_{{\mathbf opt}}{\rm =}{\mathbf U} }[/math].
The optimal solution [math]\displaystyle{ {{\mathbf Y}}_{{\mathbf opt}} }[/math] for the relaxed problem in general does not satisfy the constraint (1a); therefore, we would like to find a candidate solution of the original optimization problem which is close to the optimal solution [math]\displaystyle{ {{\mathbf Y}}_{{\mathbf opt}} }[/math]. This is called rounding.
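
The paper's specific rounding scheme is not detailed in this section; as a rough sketch only, one commonly used heuristic (an assumption here, not necessarily the paper's method) is to normalize the rows of Y_opt and run K-means on them:

import numpy as np
from sklearn.cluster import KMeans

def round_embedding(Y_opt, K, seed=0):
    # Hypothetical rounding heuristic: cluster the (row-normalized) rows of Y_opt
    # with K-means to recover a discrete clustering close to the relaxed solution.
    rows = Y_opt / np.linalg.norm(Y_opt, axis=1, keepdims=True)
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(rows)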