stat946f10
'''Maximum Variance Unfolding''', also known as '''Semidefinite Embedding'''
The main proposal of the technique is to learn a suitable kernel, subject to several constraints, from the given data.

Here are the constraints on the kernel.

'''1. Positive semidefiniteness'''
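For context, the kernel learning is commonly posed as a semidefinite program. A sketch of the standard MVU formulation (with <math>K</math> the learned kernel matrix, <math>x_i</math> the input points, and <math>\eta_{ij}=1</math> when points <math>i</math> and <math>j</math> are neighbours) is:

<math>
\begin{align*}
\max_{K} \ & \operatorname{Tr}(K) \\
\text{s.t. } \ & K \succeq 0, \quad \sum_{i,j} K_{ij} = 0, \\
& K_{ii} - 2K_{ij} + K_{jj} = \|x_i - x_j\|^2 \ \text{ whenever } \eta_{ij} = 1.
\end{align*}
</math>

Maximizing the trace "unfolds" the data by maximizing the variance of the embedding, while the last constraint preserves local distances.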
Kernel PCA is a kind of spectral decomposition in Hilbert space. In functional analysis, we have the

''Spectral Theorem for Self-Adjoint Compact Operators''

''Let <math>A(\cdot)</math> be a self-adjoint, compact operator on an infinite dimensional Hilbert space <math>H</math>. Then, there exists in <math>H</math> a complete orthonormal system <math>\{e_1,e_2,\cdots \}</math> consisting of eigenvectors of <math>A(\cdot)</math>. Moreover, for every <math>x\in H</math>,
<math>
\begin{align*}
x = \sum_{i=1}^{\infty} \left<x, e_i\right> e_i \quad \text{and} \quad A x = \sum_{i=1}^{\infty} \lambda_i \left<x, e_i\right> e_i,
\end{align*}
</math>
where <math>\lambda_i</math> is the eigenvalue corresponding to <math>e_i</math>. Furthermore, if <math>A(\cdot)</math> has infinitely many distinct eigenvalues <math>\lambda_1, \lambda_2, \cdots</math>, then <math>\lambda_n \rightarrow 0</math> as <math>n \rightarrow \infty</math>.''
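To illustrate the semidefiniteness requirement in finite dimensions, here is a minimal sketch (assuming NumPy; `is_psd` is a name introduced here for illustration) that checks whether a candidate kernel matrix is positive semidefinite by inspecting its spectrum:

```python
import numpy as np

def is_psd(K, tol=1e-10):
    # Symmetrize to guard against floating-point asymmetry,
    # then check that the smallest eigenvalue is non-negative
    # (up to a small numerical tolerance).
    K = (K + K.T) / 2
    return bool(np.linalg.eigvalsh(K).min() >= -tol)

# A Gram matrix K = X X^T is always positive semidefinite,
# so it is a valid kernel matrix.
X = np.random.randn(5, 3)
K = X @ X.T
print(is_psd(K))           # True
print(is_psd(-np.eye(5)))  # False: negative definite
```

This mirrors the constraint <math>K \succeq 0</math>: a valid kernel matrix must have no negative eigenvalues, since it is the Gram matrix of some feature mapping.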
Revision as of 18:41, 3 June 2009