Learning a Nonlinear Embedding by Preserving Class Neighborhood Structure

 
=Introduction=

The paper <ref>Salakhutdinov, R., & Hinton, G. E. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. AI and Statistics.</ref> presented here describes a method to learn a nonlinear transformation from the input space to a low-dimensional feature space in which K-nearest neighbour classification performs well. Since the performance of algorithms such as K-nearest neighbours (KNN) depends heavily on how distances between data points are computed, the main objective of the proposed algorithm is to learn a good similarity measure that can provide insight into how high-dimensional data is organized. The nonlinear transformation is learned by pre-training and fine-tuning a multilayer neural network. The authors also show how to further enhance the performance of the nonlinear transformation using unlabeled data. Experimental results on a widely used version of the MNIST handwritten digit recognition task show that the proposed algorithm achieves a much lower error rate than SVMs or standard backpropagation.
 
=Clustering=
 
Clustering refers to partitioning a given dataset into clusters such that data points in the same cluster are similar and data points in different clusters are dissimilar. Similarity is usually measured by a distance between data points.
 
 
Formally stated, given a set of data points <math>X=\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_P\}</math>, we would like to find <math>K</math> disjoint clusters <math>C=\{C_k\}_{k\in \{1,\dots ,K\}}</math> such that <math>\bigcup_k{C_k}=X</math>, optimizing a certain objective function. The dimensionality of the data points is <math>D</math>, so <math>X</math> can be represented as a matrix <math>\mathbf{X}_{D\times P}</math>. The similarity matrix that measures the similarity between each pair of points is denoted by <math>\mathbf{W}_{P\times P}</math>. A classical similarity matrix for clustering is the diagonally-scaled Gaussian similarity, defined as
 
<br><math>\mathbf W(i,j)= \exp (-(\mathbf{x}_i-\mathbf{x}_j)^{\rm T}\mathrm{Diag}(\boldsymbol{\alpha})(\mathbf{x}_i-\mathbf{x}_j) ) </math>
 
<br>where <math>\boldsymbol{\alpha}\in \mathbb{R}^D</math> is a vector of positive parameters, and <math>\mathrm{Diag}(\boldsymbol{\alpha})</math> denotes the <math>D\times D</math> diagonal matrix with diagonal <math>\boldsymbol{\alpha}</math>.
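As an illustrative sketch (not from the paper), the similarity matrix above can be computed in Python with NumPy; the function name is ours:

```python
import numpy as np

def gaussian_similarity(X, alpha):
    """Diagonally-scaled Gaussian similarity W(i, j) for the columns of X.

    X     : D x P data matrix (one data point per column)
    alpha : length-D vector of positive scale parameters
    """
    D, P = X.shape
    W = np.empty((P, P))
    for i in range(P):
        for j in range(P):
            d = X[:, i] - X[:, j]
            W[i, j] = np.exp(-d @ (alpha * d))  # d^T Diag(alpha) d
    return W

# Toy data: D = 2 dimensions, P = 3 points.
X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
W = gaussian_similarity(X, alpha=np.array([1.0, 1.0]))
```

Note that <math>\mathbf W</math> is symmetric, and each diagonal entry equals 1 because every point has zero distance to itself.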
 
 
=Objective functions=
 
==Objective function for K-means clustering==
 
Given the number of clusters <math>K</math>, K-means aims to minimize an objective function (the sum of within-cluster squared distances) over all clustering schemes <math>C</math>.
 
<br> <math>\mathop{\min_C} J=\sum^K_{k=1}\sum_{\mathbf x \in C_k}\|\mathbf x - \boldsymbol{\mu}_k\|^2</math>
 
<br> where <math>\boldsymbol{\mu}_k=\frac{1}{\left|C_k\right|}\sum_{\mathbf x\in C_k}{\mathbf x}</math> is the mean of cluster <math>C_k</math>.
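The K-means objective can be evaluated directly for any candidate clustering; a minimal sketch in Python (the function name and toy data are ours):

```python
import numpy as np

def kmeans_objective(X, labels, K):
    """Sum of squared distances from each point to its cluster mean.

    X      : D x P data matrix (points as columns)
    labels : length-P array of cluster indices in {0, ..., K-1}
    """
    J = 0.0
    for k in range(K):
        Ck = X[:, labels == k]                    # points in cluster C_k
        mu_k = Ck.mean(axis=1, keepdims=True)     # cluster mean
        J += np.sum((Ck - mu_k) ** 2)             # within-cluster distance
    return J

# Two tight groups of two points each.
X = np.array([[0.0, 0.2, 5.0, 5.2],
              [0.0, 0.0, 0.0, 0.0]])
labels = np.array([0, 0, 1, 1])
J = kmeans_objective(X, labels, K=2)
```

The standard K-means algorithm alternates between assigning points to their nearest mean and recomputing the means, which never increases this objective.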
 
 
==Min cut==
 
For two subsets <math>A,B\subset X</math>, we define
 
<br> <math>cut(A,B)=\sum_{{\mathbf x}_i \in A}\sum_{{\mathbf x}_j \in B}\mathbf W (i,j)</math>
 
<br> The min cut is the sum of inter-cluster weights:
 
<br> <math>Mincut(C)=\sum^K_{k=1} cut(C_k,X \backslash C_k)</math>
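Both quantities follow directly from the similarity matrix; an illustrative Python sketch (function names are ours):

```python
import numpy as np

def cut(W, A, B):
    """cut(A, B): sum of similarities between index sets A and B."""
    return W[np.ix_(A, B)].sum()

def mincut_value(W, clusters):
    """Sum of inter-cluster weights, where clusters is a list of index lists."""
    P = W.shape[0]
    total = 0.0
    for Ck in clusters:
        rest = [i for i in range(P) if i not in Ck]   # X \ C_k
        total += cut(W, Ck, rest)
    return total

# Two strongly-linked points {0, 1} weakly connected to point 2.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
value = mincut_value(W, [[0, 1], [2]])
```

Because <math>\mathbf W</math> is symmetric, each inter-cluster weight is counted once from each side of the cut.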
 
 
==Normalized cut==
 
The normalized cut in the paper is defined as
 
<br><math>Ncut(C)=\sum^K_{k=1}\frac{cut(C_k,X\backslash C_k)}{cut(C_k,X)}=\sum^K_{k=1}\frac{cut(C_k,X)-cut(C_k,C_k)}{cut(C_k,X)}=K-\sum^K_{k=1}{\frac{cut(C_k,C_k)}{cut(C_k,X)}}</math>
 
<br>The normalized cut takes a small value if the clusters <math>C_k</math> are not too small <ref> Ulrike von Luxburg, A Tutorial on Spectral Clustering, Technical Report No. TR-149, Max Planck Institute for Biological Cybernetics.</ref>, as measured by the intra-cluster weights, so it favours balanced clusters. It is therefore unlikely that we will obtain clusters containing a single data point.
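The balancing effect can be checked numerically; in the following sketch (our own toy example) the clustering that isolates a weakly-connected point scores better than one that splits the strongly-linked pair:

```python
import numpy as np

def ncut_value(W, clusters):
    """Ncut(C) = sum_k cut(C_k, X \\ C_k) / cut(C_k, X)."""
    P = W.shape[0]
    total = 0.0
    for Ck in clusters:
        rest = [i for i in range(P) if i not in Ck]
        num = W[np.ix_(Ck, rest)].sum()   # cut(C_k, X \ C_k)
        den = W[Ck, :].sum()              # cut(C_k, X)
        total += num / den
    return total

# Points 0 and 1 are strongly linked; point 2 is weakly attached.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
natural = ncut_value(W, [[0, 1], [2]])   # cut along the weak link
split   = ncut_value(W, [[0], [1, 2]])   # cut through the strong link
```

Cutting the weak link yields the smaller normalized cut, as the objective intends.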
 
 
==The matrix representation of Normalized cut==
 
Let <math>\mathbf{e}_k\in \{0,1\}^P</math> be the indicator vector for cluster <math>C_k</math>, whose non-zero elements indicate the data points in cluster <math>C_k</math>. Therefore, knowing <math>\mathbf E=(\mathbf{e}_1,\dots ,\mathbf{e}_K)</math> is equivalent to knowing the clustering scheme <math>C</math>. Further, let <math>\mathbf D</math> denote the diagonal matrix whose <math>i</math>-th diagonal element is the sum of the elements in the <math>i</math>-th row of <math>\mathbf W</math>, that is, <math>\mathbf D=\mathrm{Diag}(\mathbf W\cdot \mathbf 1)</math>, where <math>\mathbf 1</math> is the all-ones vector in <math>\mathbb{R}^P</math>.
 
<br>So the normalized cut can be written as
 
<br><math>Ncut(C)=C(\mathbf{W,E})=\sum^K_{k=1}\frac{\mathbf{e}^{\rm T}_k (\mathbf{D-W})\mathbf{e}_k}{\mathbf{e}^{\rm T}_k \mathbf{D}\,\mathbf{e}_k}=K-\mathrm{tr}(\mathbf {E^{\rm T} W E}(\mathbf {E^{\rm T} D E})^{-1})</math>
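The equality between the sum form and the trace form can be verified numerically; a sketch in Python (toy matrix and function names are ours):

```python
import numpy as np

def ncut_matrix_form(W, E):
    """K - tr(E^T W E (E^T D E)^{-1}) with D = Diag(W . 1)."""
    K = E.shape[1]
    D = np.diag(W.sum(axis=1))
    return K - np.trace(E.T @ W @ E @ np.linalg.inv(E.T @ D @ E))

def ncut_sum_form(W, E):
    """sum_k e_k^T (D - W) e_k / (e_k^T D e_k)."""
    D = np.diag(W.sum(axis=1))
    total = 0.0
    for k in range(E.shape[1]):
        e = E[:, k]
        total += (e @ (D - W) @ e) / (e @ D @ e)
    return total

W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
E = np.array([[1, 0],
              [1, 0],
              [0, 1]], dtype=float)   # indicator matrix for clusters {x1, x2}, {x3}
a = ncut_matrix_form(W, E)
b = ncut_sum_form(W, E)
```

Both forms agree, since <math>\mathbf{e}^{\rm T}_k (\mathbf{D-W})\mathbf{e}_k</math> is exactly the cut between <math>C_k</math> and the rest of the data.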
 
 
=Spectral Clustering=
 
Solving the normalized cut problem is NP-hard, so we turn to a relaxed version of it.
 
 
==Theorem 1==
 
Minimizing the normalized cut over all <math>C</math> is equivalent to the following optimization problem (referred to as the original optimization problem).
 
<br><math>\mathop{\min_{\mathbf Y}}K-\mathrm{tr}(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y})</math>
 
<br>subject to
 
<br><math>\mathbf Y=\mathbf D^{1/2}\mathbf E\boldsymbol{\Lambda}</math> (1a)
 
<br>and
 
<br><math>\mathbf Y^{\rm T}\mathbf Y=\mathbf I</math> (1b)
 
<br>where <math>\boldsymbol{\Lambda}\in \mathbb{R}^{K\times K},\ \mathbf Y\in \mathbb{R}^{P\times K}</math>.
 
<br>In other words, given <math>\mathbf E</math> and letting <math>\boldsymbol{\Lambda}=(\mathbf{E^{\rm T} D E})^{-1/2}</math>, we can form a candidate solution <math>\mathbf Y=\mathbf D^{1/2}\mathbf E\left(\mathbf E^{\rm T}\mathbf{D E}\right)^{-1/2}</math> for the above optimization problem.
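One can check numerically that this candidate <math>\mathbf Y</math> has orthonormal columns and that <math>K-\mathrm{tr}(\mathbf Y^{\rm T}\mathbf D^{-1/2}\mathbf W\mathbf D^{-1/2}\mathbf Y)</math> recovers the normalized cut of the clustering. A sketch (our own toy example; <math>\mathbf E^{\rm T}\mathbf{D E}</math> is diagonal for an indicator matrix, which makes its inverse square root easy):

```python
import numpy as np

W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
E = np.array([[1, 0],
              [1, 0],
              [0, 1]], dtype=float)        # clusters {x1, x2} and {x3}

d = W.sum(axis=1)
Dh  = np.diag(d ** 0.5)                    # D^{1/2}
Dih = np.diag(d ** -0.5)                   # D^{-1/2}

M = E.T @ np.diag(d) @ E                   # E^T D E (diagonal for indicator E)
Lam = np.diag(np.diag(M) ** -0.5)          # Lambda = (E^T D E)^{-1/2}

Y = Dh @ E @ Lam                           # candidate solution (1a)
obj = E.shape[1] - np.trace(Y.T @ (Dih @ W @ Dih) @ Y)
```

Here <code>Y.T @ Y</code> equals the identity, satisfying constraint (1b), and <code>obj</code> equals the normalized cut of this clustering.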
 
 
==Relaxed optimization problem==
 
Since minimizing the normalized cut is an NP-hard problem, its equivalent optimization problem is NP-hard too. However, by removing constraint (1a) in '''Theorem 1''', a relaxed problem is obtained.
 
 
<br><math>\mathop{\min_{\mathbf Y}}K-\mathrm{tr}(\mathbf{Y^{\rm T}(D^{\rm{-1/2}}WD^{\rm{-1/2}})Y})</math>
 
<br>subject to
 
<br><math>{{\mathbf Y}}^{{\rm T}}{\mathbf Y}{\mathbf =}{\mathbf I}</math>
 
<br>where <math>\mathbf Y\in \mathbb{R}^{P\times K}</math>.
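By the Rayleigh–Ritz theorem, this relaxed trace maximization is solved by taking the columns of <math>\mathbf Y</math> to be the <math>K</math> leading eigenvectors of the symmetric matrix <math>\mathbf D^{-1/2}\mathbf W\mathbf D^{-1/2}</math>, which is the core of spectral clustering. An illustrative sketch (function name is ours):

```python
import numpy as np

def spectral_embedding(W, K):
    """Top-K eigenvectors of D^{-1/2} W D^{-1/2}, the relaxed solution Y."""
    d = W.sum(axis=1)
    Dih = np.diag(d ** -0.5)               # D^{-1/2}
    M = Dih @ W @ Dih                      # normalized similarity matrix
    vals, vecs = np.linalg.eigh(M)         # eigenvalues in ascending order
    return vecs[:, -K:]                    # K leading eigenvectors as columns

W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
Y = spectral_embedding(W, K=2)
```

Because <math>\mathbf D^{-1/2}\mathbf W\mathbf D^{-1/2}</math> is symmetric, its eigenvectors are orthonormal, so <math>\mathbf Y^{\rm T}\mathbf Y=\mathbf I</math> holds automatically; a subsequent rounding step (e.g. K-means on the rows of <math>\mathbf Y</math>) recovers a discrete clustering.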
 
  
 
==References==

<references/>

Revision as of 20:36, 30 June 2009
