Maximum Variance Unfolding AKA Semidefinite Embedding
The main proposal of the technique is to learn a suitable kernel from the given data, subject to several constraints.
Here are the constraints on the kernel; a short sketch putting them together as an optimization problem follows the list.
1. Positive semidefiniteness
Kernel PCA is a form of spectral decomposition in a Hilbert space. Positive semidefiniteness allows the kernel matrix to be interpreted as storing the inner products of vectors in a Hilbert space; equivalently, all of its eigenvalues are non-negative.
2. Centering
As in the centering step of kernel PCA, the feature vectors are required to be centered here as well. The condition is given by
<math>\sum_i \Phi(x_i) = 0.</math>
Equivalently,
<math> 0 = |\sum_i \Phi(x_i)|^2 = \sum_{ij}\Phi(x_i)\Phi(x_j)=\sum_{ij}K_{ij}. </math>
3. Isometry
The local distance between a pair of neighboring data points <math>x_i, x_j</math> should be preserved in the new space between <math>\Phi(x_i), \Phi(x_j)</math>. In other words,
<math>|\Phi(x_i) - \Phi(x_j)|^2 = |x_i - x_j|^2. </math>
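Written in terms of the kernel matrix, the isometry constraint becomes <math>K_{ii} - 2K_{ij} + K_{jj} = |x_i - x_j|^2</math> for neighboring points. Below is a minimal sketch of how the three constraints could be assembled into a semidefinite program, using cvxpy and a k-nearest-neighbor graph to decide which pairwise distances to preserve. The function name <code>mvu_kernel</code>, the neighborhood size, and the trace objective (the standard "maximize variance" objective of MVU, which is not stated in the constraint list above) are illustrative choices, not part of the original notes.

<syntaxhighlight lang="python">
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def mvu_kernel(X, n_neighbors=4):
    """Sketch: learn an MVU kernel matrix K for data X (n x d) as an SDP.

    Imposes the three constraints from the notes: positive semidefiniteness,
    centering, and local isometry on a k-nearest-neighbor graph. The trace
    objective (maximizing variance in feature space) is the usual MVU choice.
    """
    n = X.shape[0]
    # Symmetric k-NN adjacency matrix; neighbors define which distances to preserve.
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity').toarray()
    G = np.maximum(G, G.T)

    # Constraint 1: K is positive semidefinite.
    K = cp.Variable((n, n), PSD=True)

    # Constraint 2: centering, sum_ij K_ij = 0.
    constraints = [cp.sum(K) == 0]

    # Constraint 3: isometry on neighboring pairs,
    # K_ii - 2 K_ij + K_jj = |x_i - x_j|^2.
    for i in range(n):
        for j in range(i + 1, n):
            if G[i, j]:
                d2 = float(np.sum((X[i] - X[j]) ** 2))
                constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)

    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()
    return K.value
</syntaxhighlight>

Once the kernel is learned, the low-dimensional embedding is read off as in kernel PCA: eigendecompose <math>K</math> and scale the top eigenvectors by the square roots of their eigenvalues.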