Kernelized Sorting

Object matching is a fundamental operation in data analysis. It typically requires the definition of a similarity measure between the classes of objects to be matched. Instead, we develop an approach which performs matching by requiring a similarity measure only within each of the classes. This is achieved by maximizing the dependency between matched pairs of observations by means of the Hilbert Schmidt Independence Criterion. The problem can be cast as a quadratic assignment problem with special structure, and we present a simple algorithm for finding a locally optimal solution.

Introduction

Problem Statement

Assume we are given two collections of documents purportedly covering the same content, written in two different languages. Can we determine the correspondence between these two sets of documents without using a dictionary?

Sorting and Matching

(Formal) problem formulation:

Given two sets of observations $X= \{ x_{1},..., x_{m} \}\subseteq \mathcal X$ and $Y=\{ y_{1},..., y_{m}\}\subseteq \mathcal Y$

Find a permutation matrix $\pi \in \Pi_{m}$,

$\Pi_{m}:= \{ \pi \mid \pi \in \{0,1\}^{m \times m} \text{ where } \pi 1_{m}=1_{m},\ \pi^{T}1_{m}=1_{m}\}$

such that $\{ (x_{i},y_{\pi (i)}) \text{ for } 1 \leqslant i \leqslant m \}$ is maximally dependent. Here $1_{m} \in \mathbb{R}^{m}$ is the vector of all ones.

Denote by $D(Z(\pi))$ a measure of the dependence between $x$ and $y$. Then we define nonparametric sorting of $X$ and $Y$ as follows:

$\pi^{\ast}:=\operatorname{argmax}_{\pi \in \Pi_{m}} D(Z(\pi)).$

Hilbert Schmidt Independence Criterion

Let sets of observations X and Y be drawn jointly from some probability distribution $Pr_{xy}$. The Hilbert Schmidt Independence Criterion (HSIC) measures the dependence between x and y by computing the norm of the cross-covariance operator over the domain $\mathcal X \times \mathcal Y$ in Hilbert Space.

Let $\mathcal {F}$ be the Reproducing Kernel Hilbert Space (RKHS) on $\mathcal {X}$ with associated kernel $k: \mathcal X \times \mathcal X \rightarrow \mathbb{R}$ and feature map $\phi: \mathcal X \rightarrow \mathcal {F}$. Let $\mathcal {G}$ be the RKHS on $\mathcal Y$ with kernel $l$ and feature map $\psi$. The cross-covariance operator $C_{xy}:\mathcal {G}\rightarrow \mathcal {F}$ is defined by

$C_{xy}=\mathbb{E}_{xy}[(\phi(x)-\mu_{x})\otimes (\psi(y)-\mu_{y})],$

where $\mu_{x}=\mathbb{E}[\phi(x)]$, $\mu_{y}=\mathbb{E}[\psi(y)]$.

HSIC is the squared Hilbert-Schmidt norm of the cross-covariance operator $C_{xy}$:

$D(\mathcal {F},\mathcal {G},Pr_{xy}):=\parallel C_{xy} \parallel_{HS}^{2}.$

In terms of kernels, HSIC can be expressed as

$\mathbb{E}_{xx'yy'}[k(x,x')l(y,y')]+\mathbb{E}_{xx'}[k(x,x')]\mathbb{E}_{yy'}[l(y,y')]-2\mathbb{E}_{xy}[\mathbb{E}_{x'}[k(x,x')]\mathbb{E}_{y'}[l(y,y')]],$

where $\mathbb{E}_{xx'yy'}$ denotes the expectation over the pair $(x, y) \sim Pr_{xy}$ and an additional pair $(x', y') \sim Pr_{xy}$ drawn independently according to the same law.

A biased estimator of HSIC given finite sample $Z = \{(x_{i}, y_{i})\}_{i=1}^{m}$ drawn from $Pr_{xy}$ is

$D(\mathcal {F},\mathcal {G},Z)=(m-1)^{-2}\operatorname{tr} HKHL = (m-1)^{-2} \operatorname{tr} \bar{K}\bar{L},$

where $K,L\in \mathbb{R}^{m\times m}$ are the kernel matrices for the data and the labels respectively, $H_{ij}=\delta_{ij}-m^{-1}$ centers the data and the labels in feature space, and $\bar{K}:=HKH$ and $\bar{L}:=HLH$ denote the centered versions of $K$ and $L$ respectively.
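As a concrete illustration, the biased estimator can be computed in a few lines of numpy (a minimal sketch of our own; the Gaussian kernel and the toy data are illustrative choices, not part of the original derivation):

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gaussian RBF kernel matrix for observations stacked as rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased HSIC estimate (m-1)^{-2} tr(HKHL) from kernel matrices K and L."""
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m      # centering matrix
    return np.trace(H @ K @ H @ L) / (m - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
y_dep = x + 0.1 * rng.normal(size=(50, 1))   # strongly dependent on x
y_ind = rng.normal(size=(50, 1))             # independent of x
K = gaussian_kernel(x)
print(hsic(K, gaussian_kernel(y_dep)))       # comparatively large
print(hsic(K, gaussian_kernel(y_ind)))       # close to zero
```

The centered form $\operatorname{tr} \bar{K}\bar{L}$ yields the same value, since $H$ is idempotent.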

Computing HSIC is simple: only the kernel matrices K and L are needed;

HSIC satisfies concentration of measure conditions, i.e. for random draws of observations from $Pr_{xy}$, HSIC provides values which are very similar;

Incorporating prior knowledge into the dependence estimation can be done via kernels.

Kernelized Sorting


Optimization problem

$\pi^{\ast}=\operatorname{argmax}_{\pi \in \Pi_{m}}\left[\operatorname{tr} \bar{K} \pi^{T}\bar{L}\pi\right]$

Sorting as a special case

For scalar $x_{i}$ and $y_{i}$ and a linear kernel on both sets, the optimization problem becomes $\pi^{\ast}=\operatorname{argmax}_{\pi \in \Pi_{m}}[X^{T} \pi Y]^{2}.$

This is maximized by sorting $X$ and $Y$.
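For small $m$ this can be verified by brute force (a sketch with toy scalar data of our own choosing): enumerating all permutations confirms that pairing the sorted sequences attains the maximum of $[X^{T}\pi Y]^{2}$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m = 6
x = rng.normal(size=m)
y = rng.normal(size=m)

def objective(p):
    """[x^T pi y]^2 for the permutation pairing x[i] with y[p[i]]."""
    return float(np.dot(x, y[list(p)]) ** 2)

best = max(objective(p) for p in itertools.permutations(range(m)))

# By the rearrangement inequality, x^T pi y is maximized by pairing both
# sequences in ascending order and minimized by pairing ascending with
# descending; one of the two extremes maximizes the square.
asc = np.dot(np.sort(x), np.sort(y)) ** 2
desc = np.dot(np.sort(x), np.sort(y)[::-1]) ** 2
print(np.isclose(best, max(asc, desc)))  # True
```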

Diagonal Dominance

Replace the expectations by sums where no pairwise summation indices are identical. This leads to the objective function:

$\frac{1}{m(m-1)}\sum_{i\ne j}K_{ij}L_{ij}+\frac{1}{m^{2}(m-1)^{2}}\sum_{i\ne j,u\ne v}K_{ij}L_{uv}- \frac{2}{m(m-1)^2}\sum_{i,j\ne i,v\ne i}K_{ij}L_{iv}$

Using $\bar{K}_{ij}=K_{ij}(1-\delta_{ij})$ and $\bar{L}_{ij}=L_{ij}(1-\delta_{ij})$, i.e. kernel matrices with the main diagonal terms removed, we arrive at the expression $(m-1)^{-1}\operatorname{tr} H\bar{L}H\bar{K}$.
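This bias-corrected expression translates directly into numpy (a minimal sketch of our own; the random positive semidefinite matrices merely stand in for real kernel matrices):

```python
import numpy as np

def hsic_nodiag(K, L):
    """(m-1)^{-1} tr(H Lbar H Kbar) with the main diagonals of K, L removed."""
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    Kbar = K * (1.0 - np.eye(m))   # Kbar_ij = K_ij (1 - delta_ij)
    Lbar = L * (1.0 - np.eye(m))
    return np.trace(H @ Lbar @ H @ Kbar) / (m - 1)

rng = np.random.default_rng(2)
F1 = rng.normal(size=(20, 5))
F2 = rng.normal(size=(20, 5))
K, L = F1 @ F1.T, F2 @ F2.T        # positive semidefinite test matrices
print(hsic_nodiag(K, L))
```

By the cyclic property of the trace the estimate is symmetric in its two arguments.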

Relaxation to a constrained eigenvalue problem

An approximate solution of the problem can be obtained by solving

$\text{maximize}_{\eta}\ \eta^{T}M\eta \quad \text{subject to}\quad A\eta=b$

Here the matrix $M=K\otimes L\in \mathbb{R}^{m^{2}\times m^{2}}$ is the Kronecker product of the constituting kernel matrices, $\eta \in \mathbb{R}^{m^2}$ is a vectorized version of the permutation matrix $\pi$, and the constraints imposed by $A$ and $b$ amount to the polytope constraints imposed by $\Pi_{m}$.
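The identities behind this formulation can be checked numerically (a sketch with random data of our own): with column-major vectorization $\eta=\mathrm{vec}(\pi)$ the quadratic assignment objective equals $\eta^{T}(K\otimes L)\eta$, and the row- and column-sum constraints of $\Pi_{m}$ form the linear system $A\eta=b$.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
F1 = rng.normal(size=(m, m)); K = F1 @ F1.T   # symmetric psd kernel matrices
F2 = rng.normal(size=(m, m)); L = F2 @ F2.T

pi = np.eye(m)[rng.permutation(m)]            # a random permutation matrix
eta = pi.flatten(order="F")                   # column-major vectorization of pi

M = np.kron(K, L)                             # M = K Kronecker L, size m^2 x m^2
print(eta @ M @ eta, np.trace(K @ pi.T @ L @ pi))  # the two values agree

# Polytope constraints pi 1_m = 1_m and pi^T 1_m = 1_m, written as A eta = b.
one = np.ones(m)
I = np.eye(m)
A = np.vstack([np.kron(one, I),               # row sums of pi
               np.kron(I, one)])              # column sums of pi
b = np.ones(2 * m)
print(np.allclose(A @ eta, b))                # True for any permutation matrix
```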

Related Work

Mutual information is defined as $I(X,Y)=h(X)+h(Y)-h(X,Y)$. We can approximate MI maximization by maximizing a lower bound on it. This then corresponds to minimizing an upper bound on the joint entropy $h(X,Y)$.

Optimization

$\pi^{\ast}=\operatorname{argmin}_{\pi \in \Pi_{m}}\log \left|HJ(\pi)H\right|,$

where $J_{ij}=K_{ij}L_{\pi(i),\pi(j)}$. This is related to the optimization criterion proposed by Jebara (2004) in the context of aligning bags of observations by sorting via minimum volume PCA.

Multivariate Extensions

Let there be $T$ random variables $x_i \in {\mathcal X}_i$ which are jointly drawn from some distribution $p(x_1,\ldots,x_T)$. The expectation operator with respect to the joint distribution and with respect to the product of the marginals is given by

$\mathbb{E}_{x_1,...,x_T}[\prod_{i=1}^{T}k_{i}(x_{i},\cdot)]$ and $\prod_{i=1}^{T}\mathbb{E}_{x_i}[k_{i}(x_{i},\cdot)]$

respectively. Both terms are equal if and only if all random variables are independent. The squared difference between both is given by

$\mathbb{E}_{x_{i=1}^{T},x_{i=1}'^{T}}\Big[\prod_{i=1}^{T}k_{i}(x_{i},x_{i}')\Big]+\prod_{i=1}^{T}\mathbb{E}_{x_{i},x_{i}'}[k_{i}(x_{i},x_{i}')]-2\,\mathbb{E}_{x_{i=1}^{T}}\Big[\prod_{i=1}^{T}\mathbb{E}_{x_{i}'}[k_{i}(x_{i},x_{i}')]\Big]$

which we refer to as multiway HSIC.

Denote by $K_{i}$ the kernel matrix obtained from the kernel $k_{i}$ on the set of observations $X_{i}:=\{x_{i1},...,x_{im}\}$, the empirical estimate is given by

$HSIC[X_{1},\ldots,X_{T}]:=1_{m}^{T}\Big(\bigodot_{i=1}^{T}K_{i}\Big)1_{m}+\prod_{i=1}^{T}1_{m}^{T}K_{i}1_{m}-2\cdot 1_{m}^{T}\Big(\bigodot_{i=1}^{T}(K_{i}1_{m})\Big)$

where $\bigodot_{i=1}^{T}$ denotes the elementwise (Hadamard) product of its arguments. To apply this to sorting we only need to define $T$ permutation matrices $\pi_{i} \in \Pi_{m}$ and replace the kernel matrices $K_{i}$ by $\pi_{i}^{T}K_{i}\pi_{i}$.
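The empirical estimate above translates directly into code (a minimal sketch of our own; random positive semidefinite matrices stand in for real kernel matrices, and no normalization beyond the expression above is applied):

```python
from functools import reduce
import numpy as np

def multiway_hsic(Ks):
    """Multiway HSIC: 1'(hadamard K_i)1 + prod 1'K_i 1 - 2 * 1' hadamard(K_i 1)."""
    m = Ks[0].shape[0]
    one = np.ones(m)
    term1 = one @ reduce(np.multiply, Ks) @ one              # elementwise product of matrices
    term2 = np.prod([one @ K @ one for K in Ks])
    term3 = 2.0 * one @ reduce(np.multiply, [K @ one for K in Ks])
    return term1 + term2 - term3

rng = np.random.default_rng(4)
Ks = []
for _ in range(3):                      # T = 3 sets of observations
    F = rng.normal(size=(10, 4))
    Ks.append(F @ F.T)                  # psd stand-in for a kernel matrix
print(multiway_hsic(Ks))
```

For $T=2$ the three terms reduce to $\sum_{ij}K_{ij}L_{ij}$, $(\sum_{ij}K_{ij})(\sum_{ij}L_{ij})$ and $2\sum_{i}(K1_{m})_{i}(L1_{m})_{i}$, matching the three terms of the pairwise HSIC expression up to normalization.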

Optimization

Convex Objective and Convex Domain

Define $\pi$ as a doubly stochastic matrix,

$P_{m}:=\{\pi \in \mathbb{R}^{m \times m} \mid \pi_{ij}\geqslant 0,\ \sum_{i}\pi_{ij}=1,\ \sum_{j}\pi_{ij}=1\}$

The objective function $\operatorname{tr} K \pi^{T}L\pi$ is convex in $\pi$.

Convex-Concave Procedure

Compute successive linear lower bounds and maximize them: $\pi_{i+1}\leftarrow \operatorname{argmax}_{\pi \in P_{m}}\left[\operatorname{tr} \bar{K} \pi^{T}\bar{L} \pi_{i}\right]$

This will converge to a local maximum.

Initialization is done via the sorted principal eigenvectors of $\bar{K}$ and $\bar{L}$.
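The full procedure can be sketched as follows (our own minimal implementation: the linear assignment subproblem is solved by brute force, which only works for tiny $m$; in practice one would use a Hungarian-method solver such as scipy.optimize.linear_sum_assignment, and the eigenvector initialization is the heuristic mentioned above):

```python
import itertools
import numpy as np

def center(K):
    """Centered kernel matrix Kbar = H K H."""
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    return H @ K @ H

def solve_lap(M):
    """argmax_pi <pi, M> over permutation matrices, by brute force (tiny m only)."""
    m = M.shape[0]
    best_p, best_v = None, -np.inf
    for p in itertools.permutations(range(m)):
        v = M[np.arange(m), list(p)].sum()
        if v > best_v:
            best_p, best_v = p, v
    pi = np.zeros((m, m))
    pi[np.arange(m), list(best_p)] = 1.0
    return pi

def kernelized_sort(K, L, iters=20):
    Kb, Lb = center(K), center(L)
    m = K.shape[0]
    # Initialize by pairing the sorted entries of the principal eigenvectors
    # (a heuristic: eigenvector signs are ambiguous).
    u = np.linalg.eigh(Kb)[1][:, -1]
    v = np.linalg.eigh(Lb)[1][:, -1]
    pi = np.zeros((m, m))
    pi[np.argsort(u), np.argsort(v)] = 1.0
    objs = [np.trace(Kb @ pi.T @ Lb @ pi)]
    for _ in range(iters):
        # Linearization: argmax_pi tr(Kb pi' Lb pi_i) = argmax_pi <pi, Lb pi_i Kb>.
        pi_new = solve_lap(Lb @ pi @ Kb)
        if np.allclose(pi_new, pi):
            break                       # fixed point reached
        pi = pi_new
        objs.append(np.trace(Kb @ pi.T @ Lb @ pi))
    return pi, objs

rng = np.random.default_rng(5)
m = 7
x = rng.normal(size=(m, 1))
y = x[rng.permutation(m)]               # y is a shuffled copy of x
K = np.exp(-(x - x.T) ** 2)             # Gaussian kernels on the scalars
L = np.exp(-(y - y.T) ** 2)
pi, objs = kernelized_sort(K, L)
print(objs)                             # objective values are non-decreasing
```

Each iteration solves a linear assignment problem; since the linear objective attains its maximum at a vertex of the doubly stochastic polytope, restricting the search to permutation matrices is sufficient.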

Summary

We generalize sorting by maximizing dependency between matched pairs of observations via HSIC.

Applications of our proposed sorting algorithm range from data visualization to image, data attribute and multilingual document matching.