Hamming Distance Metric Learning

This paper proposes a method for learning mappings from high-dimensional data to binary codes. One of the main advantages of working in a binary space is that exact KNN classification can be done in sublinear time. Like other metric learning methods, this paper optimizes a cost function based on a similarity measure between data points. One choice of similarity measure in binary space is Euclidean distance, which produces unsatisfactory results. A better choice is Hamming distance, the total number of positions at which the corresponding bits differ.
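Because binary codes can be packed into machine words, Hamming distance reduces to an XOR followed by a popcount. A minimal illustration (not from the paper):

<syntaxhighlight lang="python">
def hamming_distance(h: int, g: int) -> int:
    """Total number of bit positions at which codes h and g differ."""
    return bin(h ^ g).count("1")  # XOR marks the differing bits; count the 1s

# 0b1011 and 0b0010 differ in bit 0 and bit 3:
assert hamming_distance(0b1011, 0b0010) == 2
</syntaxhighlight>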
The task is to learn a mapping <math>b(x)</math> that projects a p-dimensional real-valued input <math>x</math> onto a q-dimensional binary code while preserving some notion of similarity. This mapping, called a hash function, is parameterized by a matrix <math>w</math> such that:

<math>b(x,w)=\operatorname{sign}(f(w,x))</math>
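A minimal sketch of this mapping, assuming the simplest linear choice <math>f(w,x)=Wx</math> (an illustrative assumption; the paper's <math>f</math> can be more general):

<syntaxhighlight lang="python">
import numpy as np

def b(x, W):
    """Hash a p-dimensional real input x to a q-dimensional binary code."""
    return (W @ x) > 0  # sign of the linear form f(W, x) = Wx, as q booleans

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))  # q = 8 code bits, p = 32 input dimensions
x = rng.standard_normal(32)
code = b(x, W)                    # boolean vector of 8 bits
</syntaxhighlight>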
In a previous paper, the authors used a loss function that bears some similarity to the hinge loss used in SVMs. It includes a hyper-parameter P, a threshold in Hamming space that differentiates neighbors from non-neighbors: similar points should map to binary codes that differ in no more than P bits, while dissimilar points should map to codes that differ in more than P bits. For two binary codes <math>h</math> and <math>g</math> with Hamming distance <math>||h-g||</math> and a similarity label <math>s \in \{0,1\}</math>, the pairwise hinge loss function is defined as:

<math>\ell_{\text{pair}}(h,g,s)=\begin{cases}\max(||h-g||-P+1,\,0) & \text{if } s=1\\\max(P-||h-g||+1,\,0) & \text{if } s=0\end{cases}</math>

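A minimal sketch of this pairwise loss (an illustration, not the authors' code), reusing the XOR-based Hamming distance from above:

<syntaxhighlight lang="python">
def pairwise_hinge_loss(h: int, g: int, s: int, P: int) -> int:
    """Hinge-style loss on a pair of binary codes with similarity label s."""
    d = bin(h ^ g).count("1")    # Hamming distance ||h - g||
    if s == 1:                   # similar pair: penalize distances beyond P
        return max(d - P + 1, 0)
    return max(P - d + 1, 0)     # dissimilar pair: penalize distances within P
</syntaxhighlight>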
However, in practice, finding a good value of P is not easy.
