Kernelized Locality-Sensitive Hashing
Locality Sensitive Hashing (LSH) is a form of dimension reduction that finds an embedding of high-dimensional data into a low-dimensional Hamming space while providing probabilistic collision guarantees. That is, similar data points will have the same low-dimensional mapping with high probability. One immediate application of LSH is large-scale nearest neighbour search/classification. A large database of objects (e.g. images) can be partitioned into disjoint buckets so that the objects of a single bucket share the same low-dimensional representation, which is used as a key to that bucket. At query time, the low-dimensional representation of the query object determines a single bucket of “the most probably similar objects”, which can then be searched in the traditional way.
For each similarity measure, a locality sensitive hashing method has to be designed carefully to ensure practically acceptable probabilistic collision guarantees. Moreover, most previous work on LSH assumes that the data points come from a multidimensional vector space and that the underlying embedding is explicitly known. That is not always the case: for example, the RBF kernel maps the data to an infinite-dimensional space, which is intractable to work with explicitly. This paper generalizes locality sensitive hashing by proposing a fully kernelized method that provides the desired collision guarantees.
Preliminaries
Central Limit Theorem
Suppose [math]\displaystyle{ \mathcal{D} }[/math] is a multivariate distribution with mean [math]\displaystyle{ \mu }[/math] and covariance [math]\displaystyle{ \Sigma }[/math]. Let [math]\displaystyle{ x_1, x_2, ..., x_t }[/math] be [math]\displaystyle{ t }[/math] random vectors sampled i.i.d from [math]\displaystyle{ \mathcal{D} }[/math]. The central limit theorem tells us that for sufficiently large [math]\displaystyle{ t }[/math], the random vector
[math]\displaystyle{ z_t = \sqrt{t}(\bar{x}_t - \mu) }[/math]
approximately follows a multivariate Gaussian distribution [math]\displaystyle{ \mathcal{N}(0,\Sigma) }[/math], where [math]\displaystyle{ \bar{x}_t = \frac{1}{t} \sum_i x_i }[/math]
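As a quick illustration (not from the paper), the following Python/NumPy sketch draws [math]\displaystyle{ t }[/math] i.i.d. samples from a non-Gaussian distribution, forms [math]\displaystyle{ z_t }[/math] over many trials, and checks that its empirical mean and covariance are close to 0 and [math]\displaystyle{ \Sigma }[/math]. The source distribution, sample sizes, and seed are illustrative choices.

```python
import numpy as np

# Illustrative check of the central limit theorem statement above (not code from the paper).
rng = np.random.default_rng(0)
d, t, trials = 3, 500, 2000

# Source distribution D: independent Exponential(1) components,
# so mu = (1, ..., 1) and Sigma = I.
mu = np.ones(d)

# For each trial, draw t i.i.d. samples and form z_t = sqrt(t) * (x_bar - mu).
z = np.empty((trials, d))
for i in range(trials):
    x = rng.exponential(scale=1.0, size=(t, d))
    z[i] = np.sqrt(t) * (x.mean(axis=0) - mu)

# For large t, z_t is approximately N(0, Sigma): mean near 0, covariance near I.
print(np.round(z.mean(axis=0), 2))
print(np.round(np.cov(z, rowvar=False), 2))
```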
Whitening Transform
If [math]\displaystyle{ x }[/math] is a random vector sampled from some multivariate distribution [math]\displaystyle{ \mathcal{D} }[/math] with zero mean and covariance [math]\displaystyle{ \Sigma }[/math], then the random vector [math]\displaystyle{ \Sigma^{-1/2}x }[/math] has identity covariance.
[math]\displaystyle{ \Sigma }[/math] is defined as [math]\displaystyle{ \Sigma = E[xx^T] }[/math].
If [math]\displaystyle{ \Sigma }[/math] is positive definite, it has an eigendecomposition of the form [math]\displaystyle{ \Sigma = V \Lambda V^T }[/math] with all eigenvalues [math]\displaystyle{ \gt 0 }[/math] (note that [math]\displaystyle{ \Sigma }[/math] is always symmetric positive semidefinite). Consequently, [math]\displaystyle{ \Sigma^{-1/2} = V \Lambda ^{-1/2} V^T }[/math]. The covariance of the random vector [math]\displaystyle{ \Sigma^{-1/2}x }[/math] is then,
[math]\displaystyle{ \tilde{\Sigma} = E[V \Lambda ^{-1/2} V^T x x^T V \Lambda ^{-1/2} V^T] = V \Lambda ^{-1/2} V^T E[xx^T] V \Lambda ^{-1/2} V^T = V \Lambda ^{-1/2} V^T (V \Lambda V^T) V \Lambda ^{-1/2} V^T = V \Lambda ^{-1/2} \Lambda \Lambda ^{-1/2} V^T = V V^T = I }[/math]
Note that the positive definiteness assumption on [math]\displaystyle{ \Sigma }[/math] does not always hold, in which case such a transformation is impossible.
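The following sketch (again an illustration, not code from the paper) whitens samples from a synthetic zero-mean distribution by forming [math]\displaystyle{ \Sigma^{-1/2} = V \Lambda ^{-1/2} V^T }[/math] from an eigendecomposition of the estimated covariance; the synthetic data and dimensions are arbitrary assumptions.

```python
import numpy as np

# Illustrative whitening transform (not code from the paper).
rng = np.random.default_rng(0)
d, n = 4, 100_000

# Zero-mean synthetic data with an arbitrary positive definite covariance.
A = rng.standard_normal((d, d))
Sigma_true = A @ A.T + 0.1 * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), Sigma_true, size=n)  # rows are samples

# Estimate Sigma and form Sigma^{-1/2} = V Lambda^{-1/2} V^T (Sigma is symmetric PD here).
Sigma = np.cov(X, rowvar=False)
eigvals, V = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = V @ np.diag(eigvals ** -0.5) @ V.T

# Whitened samples Sigma^{-1/2} x have (approximately) identity covariance.
X_white = X @ Sigma_inv_sqrt.T
print(np.round(np.cov(X_white, rowvar=False), 2))
```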
Kernel Centering
Suppose we are given [math]\displaystyle{ n }[/math] data points that are accessible only through a kernel function, and we are interested in the kernel matrix of a centered version of the data points (with the mean subtracted from all points). Let the explicit data representation in the kernel space be [math]\displaystyle{ \Phi \in \mathbb{R}^{d \times n} }[/math], whose columns are the mapped data points. An explicit centering of [math]\displaystyle{ \Phi }[/math] is given by [math]\displaystyle{ \Phi-\frac{1}{n}\Phi\textbf{e}\textbf{e}^T }[/math], where [math]\displaystyle{ \textbf{e} }[/math] is a vector of all ones of size [math]\displaystyle{ n }[/math].
Computing the kernel of the centered data:
[math]\displaystyle{
K_{cnt} = (\Phi-\frac{1}{n}\Phi\textbf{e}\textbf{e}^T)^T(\Phi-\frac{1}{n}\Phi\textbf{e}\textbf{e}^T)
= \Phi^T\Phi - \frac{1}{n}\textbf{e}\textbf{e}^T\Phi^T\Phi -\frac{1}{n}\Phi^T\Phi\textbf{e}\textbf{e}^T + \frac{1}{n^2}\textbf{e}\textbf{e}^T\Phi^T\Phi\textbf{e}\textbf{e}^T
= K - \frac{1}{n}\textbf{e}\textbf{e}^TK -\frac{1}{n}K\textbf{e}\textbf{e}^T + \frac{\textbf{e}^TK\textbf{e}}{n^2}\textbf{e}\textbf{e}^T
}[/math]
where [math]\displaystyle{ K }[/math] is the kernel matrix of the original data points.
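For concreteness, the following sketch (illustrative, not from the paper) applies the centering formula above to a linear kernel on toy data, where the result can be verified directly against explicit centering of the points; the toy data and the choice of kernel are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative kernel centering (not code from the paper): rows of X are data points
# and a linear kernel K = X X^T is used so the result can be verified directly.
rng = np.random.default_rng(0)
n, d = 6, 3
X = rng.standard_normal((n, d))
K = X @ X.T

# K_cnt = K - (1/n) e e^T K - (1/n) K e e^T + (e^T K e / n^2) e e^T,
# written with the n x n matrix ones = (1/n) e e^T.
ones = np.ones((n, n)) / n
K_cnt = K - ones @ K - K @ ones + ones @ K @ ones

# Sanity check: explicitly subtract the mean point and recompute the kernel.
X_cnt = X - X.mean(axis=0)
print(np.allclose(K_cnt, X_cnt @ X_cnt.T))  # True
```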
Locality Sensitive Hashing
Kernelized Locality Sensitive Hashing
Empirical Results
Discussion and Critique
Related Methods
Johnson-Lindenstrauss Lemma
Random Projection
Spectral Hashing
Random Fourier Features
References
1. B. Kulis and K. Grauman, "Kernelized Locality-Sensitive Hashing," IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2012.