
Oct. 1 lecture

Maximum variance unfolding

In maximum variance unfolding (MVU), as in isomap and LLE, we operate under the assumption that the data are sampled from a low-dimensional manifold embedded in a higher-dimensional Euclidean space. The aim is to "unfold" this manifold in a lower-dimensional Euclidean space such that the local structure is preserved, i.e. the distances from each point to its [math]\displaystyle{ k }[/math] nearest neighbours are unchanged, while the pairwise variance over all non-neighbouring points is maximized. The MVU algorithm uses semidefinite programming to find a kernel that achieves this aim, as sketched below.
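As a concrete illustration of the neighbourhood step, the following minimal Python sketch collects the neighbour pairs whose distances MVU must preserve. It assumes scikit-learn is available; the names X, n_neighbors, and knn_pairs are illustrative, not part of the lecture.

from sklearn.neighbors import NearestNeighbors

def knn_pairs(X, n_neighbors=5):
    # Fit a k-NN model; request n_neighbors + 1 because each point is
    # returned as its own nearest neighbour and must be excluded.
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    _, idx = nn.kneighbors(X)
    # Pairs (i, j) such that x_j is among the k nearest neighbours of x_i;
    # these are the pairs whose distances MVU preserves exactly.
    return [(i, j) for i in range(len(X)) for j in idx[i, 1:]]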

Consider a feature mapping [math]\displaystyle{ \phi }[/math] such that [math]\displaystyle{ x \mapsto \phi(x) }[/math], and let [math]\displaystyle{ \Phi = [\phi(x_1), \ldots, \phi(x_n)] }[/math] be the matrix whose columns are the mapped data points.

Then for [math]\displaystyle{ i, j }[/math] such that [math]\displaystyle{ x_i, x_j }[/math] are neighbours, we require that [math]\displaystyle{ \phi }[/math] satisfies

[math]\displaystyle{ \| x_i - x_j \|^2 = \|\phi(x_i) - \phi(x_j)\|^2 }[/math]
[math]\displaystyle{ = [\phi(x_i) - \phi(x_j)]^T [\phi(x_i) - \phi(x_j)] }[/math]
[math]\displaystyle{ = \phi(x_i)^T \phi(x_i) - \phi(x_i)^T \phi(x_j) - \phi(x_j)^T \phi(x_i) + \phi(x_j)^T \phi(x_j) }[/math]
[math]\displaystyle{ = k_{ii} - k_{ij} - k_{ji} + k_{jj} }[/math]
[math]\displaystyle{ = k_{ii} - 2k_{ij} + k_{jj} }[/math]

where [math]\displaystyle{ K = \Phi^T \Phi }[/math] is the [math]\displaystyle{ n \times n }[/math] kernel (Gram) matrix of [math]\displaystyle{ \Phi }[/math], with entries [math]\displaystyle{ k_{ij} = \phi(x_i)^T \phi(x_j) }[/math].
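Putting the pieces together, here is a minimal sketch of the resulting semidefinite program, assuming the cvxpy library; the names mvu_kernel and pairs are illustrative, not part of the lecture. The program maximizes the total variance trace(K) subject to K being positive semidefinite, the embedding being centred, and the neighbour distances derived above being preserved.

import numpy as np
import cvxpy as cp

def mvu_kernel(X, pairs):
    n = len(X)
    # K plays the role of Phi^T Phi, so it must be positive semidefinite.
    K = cp.Variable((n, n), PSD=True)
    # Centring constraint: the embedded points sum to zero, which makes
    # maximizing trace(K) equivalent to maximizing the total variance.
    constraints = [cp.sum(K) == 0]
    for i, j in pairs:
        d2 = float(np.sum((X[i] - X[j]) ** 2))
        # Local isometry for neighbours: k_ii - 2 k_ij + k_jj = ||x_i - x_j||^2
        constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)
    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()
    return K.value

Given the optimal [math]\displaystyle{ K }[/math], the low-dimensional embedding is recovered from its top eigenvectors, as in kernel PCA.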