Hamming Distance Metric Learning

Introduction

This paper proposes a method for learning mappings from high-dimensional data to binary codes. One of the main advantages of working in a binary space is that exact KNN classification can be done in sublinear time. Like other metric learning methods, this paper optimizes a cost function based on a similarity measure between data points. One choice of similarity measure on binary codes is the Euclidean distance, which produces unsatisfactory results. Another choice is the Hamming distance, which is the number of positions at which the corresponding bits differ.

The task is to learn a mapping [math]\displaystyle{ b(x) }[/math] that projects a p-dimensional real-valued input x onto a q-dimensional binary code while preserving some notion of similarity. This mapping, called a hash function, is parameterized by a matrix w such that:

[math]\displaystyle{ b(x,w)=sign(f(x,w)) }[/math]
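
As a concrete illustration (not code from the paper), the following minimal NumPy sketch assumes a linear mapping [math]\displaystyle{ f(x,w)=Wx }[/math] with a q×p matrix W; the paper allows more general parametric forms of f, and the function and variable names here are chosen only for illustration.

```python
import numpy as np

# Minimal sketch of the hash function b(x, w) = sign(f(x, w)), assuming a
# linear mapping f(x, w) = W x with a q x p matrix W.
def hash_function(x, W):
    """Map a p-dimensional real input x to a q-dimensional code in {-1, +1}^q."""
    return np.sign(W @ x)   # an exact zero would map to 0; ignored in this sketch

# Example: project a 5-dimensional input onto a 3-bit code.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))   # q = 3, p = 5
x = rng.standard_normal(5)
print(hash_function(x, W))        # a vector of +1/-1 entries
```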

In a previous paper, the authors used a loss function that bears some similarity to the hinge loss used in SVMs. It includes a hyper-parameter [math]\displaystyle{ \rho }[/math], a threshold in Hamming space that differentiates neighbors from non-neighbors: similar points should map to binary codes that differ in no more than [math]\displaystyle{ \rho }[/math] bits, while dissimilar points should map to codes that differ in more than [math]\displaystyle{ \rho }[/math] bits. For two binary codes [math]\displaystyle{ h }[/math] and [math]\displaystyle{ g }[/math] with Hamming distance [math]\displaystyle{ ||h-g||_H }[/math] and a similarity label [math]\displaystyle{ s \in \{0,1\} }[/math], the pairwise hinge loss function is defined as:

[math]\displaystyle{ l_{pair}(h,g,\rho)= \begin{cases} [\,||h-g||_H-\rho+1\,]_{+} & \text{for } s=1 \text{ (similar)} \\ [\,\rho-||h-g||_H+1\,]_{+} & \text{for } s=0 \text{ (dissimilar)} \end{cases} }[/math]

where [math]\displaystyle{ [\alpha]_{+}=\max(\alpha,0) }[/math] and [math]\displaystyle{ \rho }[/math] is a threshold value that separates similar from dissimilar codes.
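
A direct translation of this loss into code might look as follows; the function name, the representation of codes as ±1 NumPy vectors, and the example values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the pairwise hinge loss above; codes h, g are in {-1, +1}^q,
# s is the similarity label in {0, 1}, and rho is the Hamming threshold.
def pairwise_hinge_loss(h, g, s, rho):
    hamming = int(np.sum(h != g))          # ||h - g||_H: number of differing bits
    if s == 1:                             # similar pair: penalized if farther than rho
        return max(hamming - rho + 1, 0)
    else:                                  # dissimilar pair: penalized if closer than rho
        return max(rho - hamming + 1, 0)

h = np.array([1, -1, 1, 1])
g = np.array([1, 1, -1, 1])
print(pairwise_hinge_loss(h, g, s=1, rho=3))   # distance 2 <= rho, so the loss is 0
```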

In practice, finding a good value of [math]\displaystyle{ \rho }[/math] is not easy. Moreover, in some datasets it is the relative pairwise distances that matter, not their precise numerical values. As a result, the authors of this paper define the loss function in terms of relative similarity. To define relative similarity, it is assumed that the dataset contains triplets of items [math]\displaystyle{ (x,x^+,x^-) }[/math] such that [math]\displaystyle{ x }[/math] is more similar to [math]\displaystyle{ x^+ }[/math] than to [math]\displaystyle{ x^- }[/math]. With this assumption, the ranking loss on a triplet of binary codes [math]\displaystyle{ (h,h^+,h^-) }[/math] is:

[math]\displaystyle{ l_{triple}(h,h^+,h^-)=[\,||h-h^+||_H-||h-h^-||_H+1\,]_+ }[/math]
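
The following short sketch computes this ranking loss for ±1 code vectors; the example codes are made up here to show when the margin is satisfied.

```python
import numpy as np

# Sketch of the triplet ranking loss: it is zero only when h is closer in
# Hamming distance to h_plus than to h_minus by a margin of at least one bit.
def triplet_ranking_loss(h, h_plus, h_minus):
    d_plus = int(np.sum(h != h_plus))      # ||h - h^+||_H
    d_minus = int(np.sum(h != h_minus))    # ||h - h^-||_H
    return max(d_plus - d_minus + 1, 0)

h       = np.array([ 1, -1,  1, 1])
h_plus  = np.array([ 1, -1, -1, 1])   # 1 bit away from h
h_minus = np.array([-1,  1, -1, 1])   # 3 bits away from h
print(triplet_ranking_loss(h, h_plus, h_minus))   # 1 - 3 + 1 = -1 -> loss 0
```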

Optimization

Given a training set of triplets [math]\displaystyle{ D=\{(x_i,x_i^+,x_i^-)\}_{i=1}^n }[/math], the objective is to minimize the sum of the ranking losses over all training samples plus a simple regularizer on the vector of unknown parameters [math]\displaystyle{ w }[/math]:

[math]\displaystyle{ L(w)=\sum_{(x,x^+,x^-) \in D}l_{triple}(b(x,w),b(x^+,w),b(x^-,w))+\frac{\lambda}{2}||w||^2 }[/math]

This objective is discontinuous and non-convex, so its optimization is not trivial. The discontinuity is due to the sign function and can be mitigated by constructing an upper bound on the empirical loss. To do that, one can rewrite the function b as follows:

[math]\displaystyle{ b(x,w)=sign(f(x,w)) = \underset{h\in H} {arg\,max}\; h^Tf(x,w) }[/math]

where [math]\displaystyle{ H=\{-1,+1\}^q }[/math].
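
This equivalence can be checked numerically for a small code length by enumerating all of [math]\displaystyle{ H }[/math]; the snippet below is only a sanity check, not part of the paper.

```python
import itertools
import numpy as np

# Check that sign(f(x, w)) coincides with the maximizer of h^T f(x, w)
# over all codes h in H = {-1, +1}^q, for a small q.
q = 4
f_val = np.random.default_rng(1).standard_normal(q)   # stands in for f(x, w)
best = max(itertools.product([-1, 1], repeat=q), key=lambda h: float(np.dot(h, f_val)))
assert np.array_equal(np.array(best), np.sign(f_val))
```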

Upper bound on empirical loss

The upper bound on the loss has the following form:

[math]\displaystyle{ l_{triple}(b(x,w),b(x^+,w),b(x^-,w)) \leq \max_{g,g^+,g^-}\{l_{triple}(g,g^+,g^-)+g^Tf(x,w)+g^{+T}f(x^+,w)+g^{-T}f(x^-,w)\}-\max_h\{h^Tf(x,w)\}-\max_{h^+}\{h^{+T}f(x^+,w)\} -\max_{h^-}\{h^{-T}f(x^-,w)\} }[/math]

This upper bound is continuous and piecewise smooth in w as long as f is continuous in w. In particular, when [math]\displaystyle{ f }[/math] is linear in [math]\displaystyle{ w }[/math], the bound becomes piecewise linear and convex.

To use the upper bound for optimization, we must be able to find the triplet [math]\displaystyle{ (g,g^+,g^-) }[/math] that maximizes the first term of the above equation. There are [math]\displaystyle{ 2^{3q} }[/math] possible combinations of codes, which makes this maximization challenging. However, it can be solved efficiently for this class of triplet loss functions: the loss does not depend on the specific binary codes, but only on the difference of the two Hamming distances, which is an integer between [math]\displaystyle{ -q }[/math] and [math]\displaystyle{ q }[/math] and therefore takes only [math]\displaystyle{ 2q+1 }[/math] values. As a result, the maximization can be done in [math]\displaystyle{ O(q^2) }[/math] time.
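
To make the quantity being maximized concrete, the following sketch performs the maximization by brute force over all [math]\displaystyle{ 2^{3q} }[/math] code triplets for a tiny q. This is only an illustration, not the paper's [math]\displaystyle{ O(q^2) }[/math] procedure, and the function name is hypothetical.

```python
import itertools
import numpy as np

# Exhaustive maximization of l_triple(g, g+, g-) + g^T f(x,w) + g+^T f(x+,w)
# + g-^T f(x-,w) over all code triplets; feasible only for very small q.
def loss_augmented_inference_bruteforce(f_x, f_xp, f_xn):
    q = len(f_x)
    codes = [np.array(c) for c in itertools.product([-1, 1], repeat=q)]
    best_val, best = -np.inf, None
    for g in codes:
        for gp in codes:
            for gn in codes:
                loss = max(int(np.sum(g != gp)) - int(np.sum(g != gn)) + 1, 0)
                val = loss + g @ f_x + gp @ f_xp + gn @ f_xn
                if val > best_val:
                    best_val, best = val, (g, gp, gn)
    return best

rng = np.random.default_rng(2)
g_hat, g_hat_plus, g_hat_minus = loss_augmented_inference_bruteforce(
    rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3))
```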

Perceptron-like learning

Now, to learn a suitable value for [math]\displaystyle{ w }[/math], the authors use a stochastic gradient descent approach in which [math]\displaystyle{ w^0 }[/math] is first initialized randomly, and at each iteration its value is updated by the following algorithm:

1. Select a random triplet [math]\displaystyle{ (x,x^+,x^-) }[/math] from the dataset [math]\displaystyle{ D }[/math]

2. Compute [math]\displaystyle{ (\hat{h},\hat{h}^+,\hat{h}^-)=(b(x,w^t),b(x^+,w^t),b(x^-,w^t)) }[/math]

3. Compute [math]\displaystyle{ (\hat{g},\hat{g}^+,\hat{g}^-) }[/math], the binary codes that maximize the first term of the upper bound described above

4. Update the model parameters using

[math]\displaystyle{ w^{t+1}=w^{t}+\eta\left[\frac{\partial f(x)}{\partial w}(\hat{h}-\hat{g})+\frac{\partial f(x^+)}{\partial w}(\hat{h}^+-\hat{g}^+)+\frac{\partial f(x^-)}{\partial w}(\hat{h}^--\hat{g}^-)-\lambda w^t\right] }[/math]

where [math]\displaystyle{ \eta }[/math] is the learning rate and [math]\displaystyle{ \frac{\partial f(x)}{\partial w} }[/math] denotes the transpose of the Jacobian matrix of f with respect to w.
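
Assuming a linear hash function [math]\displaystyle{ f(x,w)=Wx }[/math], one iteration of this procedure could be sketched as follows. The helper `loss_augmented_inference` is a placeholder for step 3 (for example, the brute-force sketch above for tiny q, or the paper's [math]\displaystyle{ O(q^2) }[/math] procedure), and the learning rate and regularization values are illustrative assumptions.

```python
import numpy as np

# Sketch of one perceptron-like update for a linear hash function f(x, w) = W x.
# For linear f, each Jacobian-transpose product reduces to an outer product.
def train_step(W, x, x_plus, x_minus, loss_augmented_inference, lr=0.01, lam=1e-4):
    # Step 2: binary codes of the triplet under the current parameters.
    h, h_plus, h_minus = np.sign(W @ x), np.sign(W @ x_plus), np.sign(W @ x_minus)
    # Step 3: codes maximizing the loss-augmented term of the upper bound.
    g, g_plus, g_minus = loss_augmented_inference(W @ x, W @ x_plus, W @ x_minus)
    # Step 4: parameter update; np.outer(h - g, x) is (df(x)/dW)^T applied to (h - g).
    return W + lr * (np.outer(h - g, x)
                     + np.outer(h_plus - g_plus, x_plus)
                     + np.outer(h_minus - g_minus, x_minus)
                     - lam * W)
```

Repeating this update over randomly drawn triplets corresponds to stochastic gradient descent on the regularized upper bound of the empirical loss.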