Revision as of 19:27, 3 June 2009
Maximum Variance Unfolding (Semidefinite Embedding)
The main proposal of the technique is to learn a suitable kernel, subject to several constraints, from the given data.
First, we state the constraints on the kernel.
Constraints
1. Positive semidefiniteness
Kernel PCA is a kind of spectral decomposition in Hilbert space. Positive semidefiniteness allows the kernel matrix to be interpreted as storing the inner products of vectors in a Hilbert space; it also means that all eigenvalues of the kernel matrix are non-negative.
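This property is easy to check numerically. A minimal numpy sketch (a toy example, not from the original text): any Gram matrix of inner products is positive semidefinite, so its eigenvalues are non-negative up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))   # 6 toy points in 3 dimensions
K = X @ X.T                       # kernel matrix of pairwise inner products

eigvals = np.linalg.eigvalsh(K)   # eigenvalues in ascending order
print(np.all(eigvals >= -1e-10))  # True: K is positive semidefinite
```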
2. Centering
As in the centering step of kernel PCA, the mapped data are required to be centered here as well. The condition is given by
[math]\displaystyle{ \sum_i \Phi(x_i) =0 . }[/math]
Equivalently,
[math]\displaystyle{ 0 = \left|\sum_i \Phi(x_i)\right|^2 = \sum_{ij}\Phi(x_i)^T\Phi(x_j)=\sum_{ij}K_{ij}. }[/math]
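The equivalence can be verified numerically. A toy numpy check (an assumed example, not from the text): after subtracting the mean from the feature vectors, the entries of the resulting kernel matrix sum to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 2))
Phi_c = Phi - Phi.mean(axis=0)    # enforce sum_i Phi(x_i) = 0
K = Phi_c @ Phi_c.T               # centered kernel matrix

print(abs(K.sum()) < 1e-10)       # True: sum_ij K_ij = 0
```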
3. Isometry
The local distance between a pair of data points [math]\displaystyle{ x_i, x_j }[/math] that are neighbours under the neighbourhood relation [math]\displaystyle{ \eta }[/math] should be preserved between [math]\displaystyle{ \Phi(x_i), \Phi(x_j) }[/math] in the new space after mapping. In other words, for all [math]\displaystyle{ \eta_{ij}\gt 0 }[/math],
[math]\displaystyle{ |\Phi(x_i) - \Phi(x_j)|^2 = |x_i - x_j|^2. }[/math]
Additionally, when a conformal map is considered, the neighbourhood relation can be extended to pairs that share a common neighbour, i.e.
[math]\displaystyle{ [\eta^T\eta]_{ij}\gt 0. }[/math]
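In practice, the isometry constraint is imposed directly on the kernel entries, using the identity [math]\displaystyle{ |\Phi(x_i) - \Phi(x_j)|^2 = K_{ii} - 2K_{ij} + K_{jj} }[/math]. A small numpy sketch (toy data, not from the text) confirming that the two expressions agree:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((4, 3))
K = Phi @ Phi.T                              # kernel of inner products

i, j = 0, 2
lhs = np.sum((Phi[i] - Phi[j]) ** 2)         # |Phi(x_i) - Phi(x_j)|^2
rhs = K[i, i] - 2 * K[i, j] + K[j, j]        # same quantity via kernel entries
print(np.isclose(lhs, rhs))                  # True
```

This reformulation is what makes the isometry conditions linear constraints in [math]\displaystyle{ K }[/math].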
Objective Functions
Given these constraints, the objective function should be chosen. The aim of dimensionality reduction is to map high-dimensional data into a low-dimensional space with minimal loss of information. Recall that the dimension of the new space equals the rank of the kernel. Hence the ideal kernel is the one with minimum rank, so the ideal objective function would be
[math]\displaystyle{ \min\quad \operatorname{rank}(K). }[/math]
However, minimizing the rank of a matrix is a hard, non-convex problem, so we look at the question another way. In dimensionality reduction we try to maximize the distance between non-neighbouring points; in other words, we want to maximize the variance of the embedded data. In this sense, we can change the objective function to
[math]\displaystyle{ \max \quad \operatorname{Tr}(K). }[/math]
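The link between the trace and the variance can be seen from the identity [math]\displaystyle{ \operatorname{Tr}(K) = \tfrac{1}{2n}\sum_{ij}|\Phi(x_i)-\Phi(x_j)|^2 }[/math], which holds whenever the centering constraint [math]\displaystyle{ \sum_i \Phi(x_i)=0 }[/math] is satisfied, so maximizing the trace maximizes the total pairwise spread. A numpy check of this identity on toy data (an illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = rng.standard_normal((6, 3))
Phi_c = Phi - Phi.mean(axis=0)               # centering: sum_i Phi(x_i) = 0
K = Phi_c @ Phi_c.T
n = K.shape[0]

# All pairwise squared distances between the centered feature vectors.
sq_dists = np.sum((Phi_c[:, None, :] - Phi_c[None, :, :]) ** 2, axis=-1)

print(np.isclose(np.trace(K), sq_dists.sum() / (2 * n)))  # True
```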
Algorithm for Optimization Problem
The objective function together with the linear constraints forms a typical semidefinite programming problem. The optimization is convex, so its global optimum can be found, and standard solvers exist for this kind of problem.
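Once a solver returns the optimal kernel [math]\displaystyle{ K }[/math], the low-dimensional coordinates are recovered from its top eigenvectors, exactly as in kernel PCA. A numpy sketch of this final read-out step; here a toy centered Gram matrix stands in for the solver's output, and the names (`Y`, `d`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 3))
Xc = X - X.mean(axis=0)
K = Xc @ Xc.T                     # stand-in for the kernel learned by the SDP

# Embed with the top-d eigenvectors, scaled by the square roots of the
# corresponding eigenvalues (clipped at zero against round-off).
d = 2
eigvals, eigvecs = np.linalg.eigh(K)         # ascending order
Y = eigvecs[:, -d:] * np.sqrt(np.maximum(eigvals[-d:], 0.0))

print(Y.shape)                               # (8, 2): one 2-D point per input
```

The inner products of the rows of `Y` reproduce the part of [math]\displaystyle{ K }[/math] carried by its top [math]\displaystyle{ d }[/math] eigenvalues, so the embedding preserves as much of the learned kernel as a rank-[math]\displaystyle{ d }[/math] matrix can.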