stat946f10
Revision as of 18:44, 3 June 2009

Maximum Variance Unfolding (also known as Semidefinite Embedding)

The main proposal of the technique is to learn a suitable kernel, subject to several constraints, when the data is given.

Here are the constraints on the kernel.

1. Positive semidefiniteness
Kernel PCA is a kind of spectral decomposition in Hilbert space. In functional analysis, we have the Spectral Theorem for Self-Adjoint Compact Operators:
Let [math]\displaystyle{ A(\cdot) }[/math] be a self-adjoint, compact operator on an infinite dimensional Hilbert space [math]\displaystyle{ H }[/math]. Then, there exists in [math]\displaystyle{ H }[/math] a complete orthonormal system [math]\displaystyle{ \{e_1,e_2,\cdots \} }[/math] consisting of eigenvectors of [math]\displaystyle{ A(\cdot) }[/math]. Moreover, for every [math]\displaystyle{ x\in H }[/math],
[math]\displaystyle{ A x = \sum_{i=1}^{\infty} \lambda_i \langle x, e_i\rangle e_i, }[/math]
where [math]\displaystyle{ \lambda_i }[/math] is the eigenvalue corresponding to [math]\displaystyle{ e_i }[/math]. Furthermore, if [math]\displaystyle{ A(\cdot) }[/math] has infinitely many distinct eigenvalues [math]\displaystyle{ \lambda_1, \lambda_2, \cdots }[/math], then [math]\displaystyle{ \lambda_n \rightarrow 0 }[/math] as [math]\displaystyle{ n \rightarrow \infty }[/math].
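In finite dimensions, the positive-semidefiniteness constraint says that the Gram (kernel) matrix built from the data must have no negative eigenvalues. As a minimal illustrative sketch (not from the original notes), the following Python snippet builds a Gram matrix with an RBF kernel on some arbitrary sample points and verifies positive semidefiniteness via its eigenvalue decomposition; the kernel choice, the data, and the numerical tolerance are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    The RBF kernel is a standard example of a kernel whose Gram
    matrix is positive semidefinite for any set of input points.
    """
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared Euclidean distances via the expansion
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>.
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))  # arbitrary sample points for illustration
K = rbf_kernel(X)

# A symmetric matrix is positive semidefinite iff all of its
# eigenvalues are nonnegative (up to numerical round-off).
eigvals = np.linalg.eigvalsh(K)
is_psd = np.all(eigvals >= -1e-10)
print(is_psd)
```

A kernel learned by maximum variance unfolding must pass exactly this kind of check: its Gram matrix is constrained to be positive semidefinite so that it corresponds to inner products in some Hilbert space.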