Hamming Distance Metric Learning


Revision as of 00:16, 11 July 2013


This paper proposes a method to learn mappings from high-dimensional data to binary codes. One of the main advantages of a binary code space is that exact KNN classification can be done in sublinear time. Like other metric learning methods, this paper optimizes a cost function based on a similarity measure between data points. One choice of dissimilarity measure between binary codes is Euclidean distance, which produces unsatisfactory results. Another choice is Hamming distance, which is the total number of positions at which the corresponding bits differ.
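For concreteness, the Hamming distance between two binary codes can be computed by counting differing positions (a minimal illustration, not part of the paper's method; the function name is ours):

```python
import numpy as np

def hamming_distance(h, g):
    """Number of positions at which two binary codes differ."""
    h, g = np.asarray(h), np.asarray(g)
    return int(np.sum(h != g))

# Example: codes that differ in exactly two positions
h = [1, 0, 1, 1, 0]
g = [1, 1, 1, 0, 0]
print(hamming_distance(h, g))  # → 2
```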

The task is to learn a mapping [math]b(x)[/math] that projects a p-dimensional real-valued input [math]x[/math] onto a q-dimensional binary code while preserving some notion of similarity. This mapping, called a hash function, is parameterized by a matrix [math]w[/math]; each bit of the code is obtained by thresholding a linear projection of the input:

[math] h_r(x) = \begin{cases} 1 & r^Tx \ge 0 \\ 0 & \text{otherwise} \end{cases} [/math]      (1)

In a previous paper, the authors used a loss function that bears some similarity to the hinge loss used in SVMs. It includes a hyper-parameter [math]\rho[/math], a threshold in Hamming space that differentiates neighbors from non-neighbors: similar points should map to binary codes that differ in no more than [math]\rho[/math] bits, while dissimilar points should map to codes that differ in more than [math]\rho[/math] bits. For two binary codes [math]h[/math] and [math]g[/math] with Hamming distance [math]||h-g||_H[/math] and a similarity label [math]s \in \{0,1\}[/math], the pairwise hinge loss function is defined as:

[math] l_{pair}(h,g,\rho)= \begin{cases} [||h-g||_H-\rho+1]_{+} & \text{for } s=1 \text{ (similar)} \\ [\rho-||h-g||_H+1]_{+} & \text{for } s=0 \text{ (dissimilar)} \end{cases} [/math]
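As a sketch, the thresholded hash function and the pairwise hinge loss above might be implemented as follows (illustrative names, assuming a linear hash whose r-th bit thresholds the r-th row of [math]w[/math] against the input):

```python
import numpy as np

def hash_code(x, W):
    """Map input x to a q-bit binary code: bit r is 1 iff W[r]^T x >= 0."""
    return (W @ x >= 0).astype(int)

def pairwise_hinge(h, g, s, rho):
    """Hinge loss on a pair of codes with similarity label s in {0, 1}."""
    d = np.sum(h != g)                 # Hamming distance between the codes
    if s == 1:                         # similar: penalize if d > rho - 1
        return max(d - rho + 1, 0)
    return max(rho - d + 1, 0)         # dissimilar: penalize if d < rho + 1

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))        # q = 8 bits, p = 4 input dimensions
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
h, g = hash_code(x1, W), hash_code(x2, W)
print(pairwise_hinge(h, g, s=1, rho=3))
```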

However, in practice, finding a good value of [math]\rho[/math] is not easy. Moreover, in some datasets the relative pairwise distances matter, not their precise numerical values. As a result, the authors define the loss function in terms of relative similarity. To define relative similarity, it is assumed that the dataset includes triplets of items [math](x,x^+,x^-)[/math] such that [math]x[/math] is more similar to [math]x^+[/math] than to [math]x^-[/math]. With this assumption, the ranking loss on a triplet of binary codes [math](h,h^+,h^-)[/math] is:

[math] l_{triple}(h,h^+,h^-)=[\,||h-h^+||_H-||h-h^-||_H+1\,]_{+} [/math]

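A minimal sketch of this triplet ranking loss, assuming the hinge form [math][\,||h-h^+||_H-||h-h^-||_H+1\,]_{+}[/math] from the paper (the helper names are ours):

```python
import numpy as np

def triplet_loss(h, h_pos, h_neg):
    """Ranking hinge: zero only if h is at least 1 bit closer to h_pos than to h_neg."""
    d_pos = np.sum(h != h_pos)   # Hamming distance to the similar item
    d_neg = np.sum(h != h_neg)   # Hamming distance to the dissimilar item
    return max(d_pos - d_neg + 1, 0)

h     = np.array([1, 0, 1, 1])
h_pos = np.array([1, 0, 1, 0])   # 1 bit away from h
h_neg = np.array([0, 1, 0, 1])   # 3 bits away from h
print(triplet_loss(h, h_pos, h_neg))  # → 0 (ranking satisfied with margin)
```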
Given a training set of triplets, [math]D=\{(x_i,x_i^+,x_i^-)\}_{i=1}^n[/math], the objective is to minimize the sum of the ranking loss over all training samples plus a simple regularizer on the vector of unknown parameters [math]w[/math]:

[math]L(w)=\sum_{(x,x^+,x^-) \in D}l_{triple}(b(x,w),b(x^+,w),b(x^-,w))+\frac{\lambda}{2}||w||^2[/math]
which is a discontinuous and non-convex function, so optimization is not trivial. The discontinuity is caused by the sign function and can be mitigated by constructing an upper bound on the empirical loss. To do so, one can rewrite the function b as follows:
[math]b(x,w)=sign(f(x,w)) = \underset{h\in H} {arg\,max}\ h^Tf(x,w)[/math]

where [math]H=\{-1,+1\}^q[/math].
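The equivalence between the sign and argmax forms can be checked numerically for small q by brute force (an illustrative check, not part of the method):

```python
import itertools
import numpy as np

def b_sign(f):
    """b(x,w) = sign(f(x,w)), with bits in {-1, +1}."""
    return np.where(f >= 0, 1, -1)

def b_argmax(f):
    """argmax over all h in {-1,+1}^q of h^T f; feasible only for tiny q."""
    q = len(f)
    return max(itertools.product([-1, 1], repeat=q),
               key=lambda h: np.dot(h, f))

f = np.array([0.7, -1.2, 0.3, -0.1])
print(b_sign(f))                  # each bit aligns with the sign of f
print(np.array(b_argmax(f)))      # same code as b_sign(f)
```

Each term of [math]h^Tf[/math] is maximized independently by matching the sign of the corresponding entry of [math]f[/math], which is why the two forms agree.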

Upper bound on empirical loss

The upper bound on the loss has the following form:

[math]l_{triple}(b(x,w),b(x^+,w),b(x^-,w)) \leq \max_{g,g^+,g^-}\{l_{triple}(g,g^+,g^-)+g^Tf(x,w)+g^{+^T}f(x^+,w)+g^{-^T}f(x^-,w)\}-\max_h\{h^Tf(x,w)\}-\max_{h^+}\{h^{+^T}f(x^+,w)\} -\max_{h^-}\{h^{-^T}f(x^-,w)\}[/math]

This upper bound is continuous and piecewise smooth in [math]w[/math] as long as [math]f[/math] is continuous in [math]w[/math]. In particular, when [math]f[/math] is linear in [math]w[/math], the bound becomes piecewise linear and convex.

To use the upper bound for optimization, we should be able to find the codes [math](g,g^+,g^-)[/math] that maximize the first term of the above equation. There are [math]2^{3q}[/math] possible combinations of codes, which makes this maximization challenging. However, it can be solved efficiently for the class of triplet loss functions: such loss functions do not depend on the specific binary codes but only on the differences between them, and each difference is an integer between [math]-q[/math] and [math]q[/math], so it can take only [math]2q+1[/math] values. As a result, this maximization can be done in [math]O(q^2)[/math] time.
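For intuition, the loss-augmented maximization can be checked by brute force over all [math]2^{3q}[/math] code combinations when q is tiny, in contrast to the efficient [math]O(q^2)[/math] algorithm the authors describe. A sketch under that assumption (function names are ours):

```python
import itertools
import numpy as np

def triplet_loss(g, g_pos, g_neg):
    """Ranking hinge on a triplet of {-1,+1} codes."""
    d_pos = np.sum(np.array(g) != np.array(g_pos))
    d_neg = np.sum(np.array(g) != np.array(g_neg))
    return max(d_pos - d_neg + 1, 0)

def loss_augmented_inference(f, f_pos, f_neg):
    """Brute-force max over (g, g+, g-) of loss + linear score terms; tiny q only."""
    q = len(f)
    codes = list(itertools.product([-1, 1], repeat=q))
    best, best_val = None, -np.inf
    for g in codes:
        for gp in codes:
            for gn in codes:
                val = (triplet_loss(g, gp, gn)
                       + np.dot(g, f) + np.dot(gp, f_pos) + np.dot(gn, f_neg))
                if val > best_val:
                    best, best_val = (g, gp, gn), val
    return best, best_val

f, fp, fn = np.array([0.5, -0.2]), np.array([0.1, 0.4]), np.array([-0.3, 0.2])
(g, gp, gn), val = loss_augmented_inference(f, fp, fn)
print(g, gp, gn, val)
```

By construction the maximized value is at least the value attained at the plain sign codes, which is what makes the right-hand side of the inequality an upper bound.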

Perceptron-like learning

Now, to learn a good value for [math]w[/math], the authors use a stochastic gradient descent approach in which they first initialize [math]w^0[/math] randomly and then, at iteration [math]t[/math], update this value by the following algorithm:

1. Select a random triplet [math](x,x^+,x^-)[/math] from dataset [math]D[/math]

2. Compute [math](\hat{h},\hat{h}^+,\hat{h}^-)=(b(x,w^t),b(x^+,w^t),b(x^-,w^t))[/math]

3. Compute [math](\hat{g},\hat{g}^+,\hat{g}^-)[/math] by solving the loss-augmented inference problem, i.e., the first maximization in the upper bound above

4. Update the model parameters using
[math]w^{t+1}=w^{t}+\eta[\frac{\partial f(x)}{\partial w}(\hat{h}-\hat{g})+\frac{\partial f(x^+)}{\partial w}(\hat{h}^+-\hat{g}^+)+\frac{\partial f(x^-)}{\partial w}(\hat{h}^--\hat{g}^-)-\lambda w^t][/math]

where [math]\eta[/math] is the learning rate and [math]\frac{\partial f(x)}{\partial w}[/math] is the transpose of the Jacobian matrix.
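Steps 1-4 can be put together for the simple case of a linear [math]f(x,w)=Wx[/math], where the Jacobian term reduces to an outer product with the input. This is a sketch under that assumption, using brute-force loss-augmented inference (so it only runs for tiny q); all names are illustrative:

```python
import itertools
import numpy as np

def sign_code(f):
    """Step 2: quantize real-valued scores to a {-1,+1} code."""
    return np.where(f >= 0, 1, -1)

def triplet_loss(g, gp, gn):
    return max(np.sum(g != gp) - np.sum(g != gn) + 1, 0)

def loss_augmented(f, fp, fn):
    """Step 3: brute-force argmax of loss + linear terms over {-1,+1}^q triples."""
    q = len(f)
    codes = [np.array(c) for c in itertools.product([-1, 1], repeat=q)]
    return max(((g, gp, gn) for g in codes for gp in codes for gn in codes),
               key=lambda t: triplet_loss(t[0], t[1], t[2])
                             + t[0] @ f + t[1] @ fp + t[2] @ fn)

def sgd_step(W, x, x_pos, x_neg, eta=0.1, lam=1e-3):
    """One perceptron-like update for linear f(x, W) = W @ x."""
    f, fp, fn = W @ x, W @ x_pos, W @ x_neg
    h, hp, hn = sign_code(f), sign_code(fp), sign_code(fn)   # step 2
    g, gp, gn = loss_augmented(f, fp, fn)                    # step 3
    grad = (np.outer(h - g, x) + np.outer(hp - gp, x_pos)    # step 4: for linear f,
            + np.outer(hn - gn, x_neg) - lam * W)            # df/dW applied to h is h x^T
    return W + eta * grad

rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3))       # q = 2 bits, p = 3 input dimensions
x, xp, xn = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
W = sgd_step(W, x, xp, xn)            # step 1 would sample (x, x+, x-) from D
print(W.shape)  # (2, 3)
```

Note that when the quantized codes already maximize the loss-augmented objective, [math]\hat{h}=\hat{g}[/math] and the update reduces to weight decay, which is the perceptron-like character of the algorithm.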