Dimensionality Reduction by Learning an Invariant Mapping


== 1. Intention ==
The drawbacks of most existing techniques:
1. Most of them depend on a meaningful and computable distance metric in the input space (e.g., LLE and Isomap rely on a computable distance).
2. They do not compute a "function" that can accurately map new input samples whose relationship to the training data is unknown.
To overcome these drawbacks, this paper introduces a technique called DrLIM (Dimensionality Reduction by Learning an Invariant Mapping). The learning relies solely on neighborhood relationships and does not require any distance measure in the input space.
== 2. Mathematical Model ==
Input: A set of vectors <math> I=\{x_1,x_2,\ldots,x_p\} </math>, where <math> x_i\in \mathbb{R}^D, \forall i=1,2,\ldots,p. </math>
Output: A parametric function <math>G_W:\mathbb{R}^D \rightarrow \mathbb{R}^d </math> with <math> d \ll D </math>.
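A minimal sketch of such a mapping in Python follows. The paper leaves the form of <math>G_W</math> open (it can be any trainable parametric function, e.g. a neural network); a plain linear map <math>G_W(x)=Wx</math> is assumed here purely to make the dimensions concrete.
<pre>
import numpy as np

# A minimal sketch of the parametric mapping G_W : R^D -> R^d.
# The form of G_W is an assumption: the paper allows any trainable
# parametric function; a linear map W x is used only for illustration.
D, d = 100, 2                            # input/output dimensions, d << D
rng = np.random.default_rng(0)
W = rng.standard_normal((d, D)) * 0.01   # the learnable parameters W

def G(W, x):
    """Map an input x in R^D to its low-dimensional image in R^d."""
    return W @ x

x_1 = rng.standard_normal(D)  # one input vector from the set I
print(G(W, x_1).shape)        # (2,)
</pre>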
The optimization problem of BoostMetric is similar to that of the large margin nearest neighbor algorithm (LMNN [4]). In the preprocessing step, the labeled training samples are transformed into "triplets" (a<sub>i</sub>, a<sub>j</sub>, a<sub>k</sub>), where a<sub>i</sub> and a<sub>j</sub> are in the same class, but a<sub>i</sub> and a<sub>k</sub> are in different classes. Let dist<sub>i,j</sub> and dist<sub>i,k</sub> denote the distance between a<sub>i</sub> and a<sub>j</sub> and the distance between a<sub>i</sub> and a<sub>k</sub>, respectively. The goal is to maximize the difference between these two distances; a sketch of the triplet construction follows.
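Below is one way this preprocessing step could look in Python. The sampling strategy (pairing every same-class pair with one randomly chosen different-class sample) is an assumption made here for illustration; LMNN, for example, restricts a<sub>j</sub> to nearest same-class neighbors.
<pre>
import numpy as np

# Sketch of the triplet construction: build (a_i, a_j, a_k) index
# triples with label(a_i) == label(a_j) and label(a_i) != label(a_k).
# The random choice of a_k is an illustrative assumption, not the
# paper's prescribed sampling scheme.
def make_triplets(labels, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.asarray(labels)
    n = len(labels)
    triplets = []
    for i in range(n):
        same = np.flatnonzero((labels == labels[i]) & (np.arange(n) != i))
        diff = np.flatnonzero(labels != labels[i])
        for j in same:
            if diff.size > 0:
                triplets.append((i, int(j), int(rng.choice(diff))))
    return triplets

print(make_triplets([0, 0, 1, 1])[:3])  # e.g. [(0, 1, 2), (1, 0, 3), (2, 3, 0)]
</pre>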
Here the distance is a Mahalanobis distance, parameterized by the matrix <math>X = LL^T</math> as follows:
<math>dist_{ij}^{2}=\left \| L^Ta_i-L^Ta_j \right \|_2^2=(a_i-a_j)^TLL^T(a_i-a_j)=(a_i-a_j)^TX(a_i-a_j).</math>
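This identity can be checked numerically; the sketch below (with arbitrary illustrative dimensions) confirms that the squared distance computed with <math>X=LL^T</math> matches the squared Euclidean distance between the projected points <math>L^Ta_i</math> and <math>L^Ta_j</math>.
<pre>
import numpy as np

# Numerical check of the identity above. Dimensions are chosen
# arbitrarily for illustration; X = L L^T is positive semidefinite
# by construction.
rng = np.random.default_rng(1)
L = rng.standard_normal((5, 3))
X = L @ L.T
a_i, a_j = rng.standard_normal(5), rng.standard_normal(5)

lhs = np.sum((L.T @ (a_i - a_j)) ** 2)   # ||L^T a_i - L^T a_j||_2^2
rhs = (a_i - a_j) @ X @ (a_i - a_j)      # (a_i - a_j)^T X (a_i - a_j)
print(np.isclose(lhs, rhs))              # True
</pre>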
