# Neighbourhood Components Analysis

## Introduction

Neighbourhood Components Analysis (NCA) is a method for learning a Mahalanobis distance measure for k-nearest neighbours (KNN). In particular, it finds a distance metric that maximises the leave-one-out (LOO) classification performance on the training set for a stochastic variant of KNN. NCA can also learn a low-dimensional linear embedding of labelled data for data visualisation and for improved KNN classification speed.

## k-Nearest Neighbours

k-Nearest neighbours is a simple classification technique that determines a test point's label by looking at the labels of the $k$ training points that are nearest the test point. It is a surprisingly effective, essentially non-parametric method (the only parameter is $k$) with a non-linear decision surface.
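As a point of reference, a minimal KNN classifier under an ordinary Euclidean metric might look like the following NumPy sketch (illustrative, with hypothetical names; the distance computation in the first line is exactly the part NCA later learns to replace):

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, k=3):
    """Label x_test by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_test, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]                     # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # most common label among the neighbours
```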

However, KNN suffers from two problems. First, it can be computationally expensive to classify points, as each test point must be compared to the entire training set. Second, there is the problem of choosing the distance metric that determines which points are "nearest".

NCA attacks both of these problems. It learns the distance metric that defines which points are nearest, and it can restrict this metric to be low rank, reducing the dimensionality of the data and thus reducing storage and search times.

NCA finds a matrix $A$, with $Q = A^T A$, such that the distance between two points is defined as: $d(x,y) = (x - y)^T Q (x - y) = (Ax - Ay)^T (Ax - Ay)$
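A quick numerical check of this identity (an illustrative sketch with made-up values, not part of the paper):

```python
import numpy as np

# Check that d(x, y) = (x - y)^T Q (x - y) equals ||Ax - Ay||^2 when Q = A^T A.
# A need not be square; here it is 2 x 5.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))
x, y = rng.normal(size=5), rng.normal(size=5)

Q = A.T @ A
d_quadratic = (x - y) @ Q @ (x - y)          # Mahalanobis-style quadratic form
d_transformed = np.sum((A @ x - A @ y) ** 2)  # squared Euclidean distance after projection
assert np.isclose(d_quadratic, d_transformed)
```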

The basic idea of NCA is to find the transformation $A$ that maximises KNN classification performance on test data. As test data are not available during training, the method uses leave-one-out (LOO) performance instead. However, KNN has a discontinuous error function, as points can suddenly change class as they cross class boundaries. Instead, NCA optimises a stochastic neighbour assignment: a test point is assigned a neighbour (and inherits the neighbour's class) with probability $p_{ij}$, which decreases as the distance between the points increases.
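Concretely, point $i$ selects point $j$ as its neighbour with a softmax probability over squared distances in the transformed space:

$p_{ij} = \frac{\exp(-\lVert Ax_i - Ax_j \rVert^2)}{\sum_{k \neq i} \exp(-\lVert Ax_i - Ax_k \rVert^2)}, \qquad p_{ii} = 0$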

NCA maximises the expected number of correctly classified points, where $C_i$ denotes the set of training points in the same class as point $i$, through the objective function:

$f(A) = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i$

Its gradient, writing $x_{ij} = x_i - x_j$, is readily derived:

$\frac{\partial f}{\partial A} = 2A \sum_i \left( p_i \sum_k p_{ik} x_{ik} x_{ik}^T - \sum_{j \in C_i} p_{ij} x_{ij} x_{ij}^T \right)$

NCA then finds $A$ using a gradient-based optimiser. Note, however, that the cost function is not convex, so the optimisation must cope with local optima. The authors comment that they never observed any effects of overfitting with this method, and that performance never decreased as the training set size increased.
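The following NumPy sketch (illustrative, not the authors' code) evaluates the objective and gradient above and maximises them with plain gradient ascent; the data, learning rate, and iteration count are placeholder assumptions.

```python
import numpy as np

def nca_objective_and_grad(A, X, y):
    """f(A) = sum_i p_i and its gradient, following the formulas above.

    A : (d, D) transformation, X : (N, D) inputs, y : (N,) class labels.
    """
    N = X.shape[0]
    Z = X @ A.T                                    # transformed points, (N, d)
    diff_Z = Z[:, None, :] - Z[None, :, :]         # pairwise differences in the transformed space
    dist2 = np.sum(diff_Z ** 2, axis=-1)           # squared distances, (N, N)
    np.fill_diagonal(dist2, np.inf)                # a point never selects itself
    logits = -dist2 - (-dist2).max(axis=1, keepdims=True)  # stabilise the softmax
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)              # p_ij: stochastic neighbour probabilities
    same = y[:, None] == y[None, :]                # j in C_i (same class as i)
    p_i = (P * same).sum(axis=1)                   # probability that i is classified correctly
    f = p_i.sum()

    # grad = 2A sum_i ( p_i sum_k p_ik x_ik x_ik^T - sum_{j in C_i} p_ij x_ij x_ij^T )
    diff_X = X[:, None, :] - X[None, :, :]         # x_ij = x_i - x_j in the input space
    W = P * p_i[:, None] - P * same                # scalar weight on each outer product x_ij x_ij^T
    M = np.einsum('ij,ijk,ijl->kl', W, diff_X, diff_X)   # (D, D)
    grad = 2.0 * A @ M
    return f, grad

# Plain gradient ascent on f(A); the paper simply uses a gradient-based optimiser.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 3, size=100)
A = 0.1 * rng.normal(size=(2, 5))                  # rectangular A gives a rank-2 metric
for _ in range(200):
    f, grad = nca_objective_and_grad(A, X, y)
    A += 0.01 * grad                               # ascend, since f is maximised
```

Because the objective is non-convex, runs from different random initialisations of $A$ can converge to different local optima.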

It is possible to use a cost function based on KL-divergence, and the authors note that performance is very similar to that achieved through the above cost function.
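This alternative objective maximises the sum of log-probabilities of correct classification:

$g(A) = \sum_i \log \left( \sum_{j \in C_i} p_{ij} \right) = \sum_i \log p_i$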

The scale of the $A$ found by NCA, together with its relative row norms, provides an estimate of the optimal number of neighbours to use for KNN: in the stochastic neighbour assignment, a larger-scale $A$ concentrates the probabilities $p_{ij}$ on fewer, closer neighbours (so classification effectively averages over fewer neighbours), and vice versa.

## Low Rank Distance Metrics and Nonsquare Projection

NCA allows for linear dimensionality reduction of the input data. This provides computational advantages and is relatively immune to overfitting.

During the optimisation procedure in NCA, the matrix $A$ can be restricted to size $d \times D$, where $d \lt D$. This forces the learned metric to be low rank, and maps all inputs to $\mathbb{R}^d$.

If $d \ll D$, the storage and search times for KNN will be much lower. Storage can be reduced from $O(DN)$ to $O(dN) + dD$ by storing the training set in the lower-dimensional projection. If $d$ is 2 or 3, then projected points can be easily plotted for data visualisation.
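A sketch of how the rectangular $A$ would be used at classification time (illustrative function names; assumes $A$ of shape $d \times D$ has already been learned):

```python
import numpy as np

def project_training_set(A, X_train):
    """Store only the d-dimensional projections plus A: O(dN) + dD numbers."""
    return X_train @ A.T                              # (N, d)

def knn_predict_projected(A, Z_train, y_train, x_test, k=3):
    """Classify a new point entirely in the d-dimensional space."""
    z = A @ x_test                                    # project the query point
    dists = np.linalg.norm(Z_train - z, axis=1)       # search is now over d-dimensional vectors
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```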

The NCA method remains the same as before: optimise the above cost function with respect to $A$, except that $A$ is now non-square. Note that NCA itself does not suggest an optimal value for $d$.

## Experimental Results

The NCA method is evaluated on data from the UCI repository, as well as on generated data. It is compared to other Mahalanobis distance metrics for KNN; it consistently performed better on training data, and its test performance was always at or near the top.

The method was also compared to PCA and LDA for dimensionality reduction. These are rather basic methods, and NCA produced better 2D projections of the data. When these 2D projections were used for classification, NCA consistently performed better on both training and test data.

No timing data was provided by the authors.

Figures are available in the original paper.

## Extensions to Continuous Labels and Semi-Supervised Learning

NCA finds a linear transformation that maximises classification performance on discrete classes. The method can be extended to learn a non-linear transformation using the same gradient-based optimisation technique, provided the transformation is differentiable so that it can be trained with gradient methods; a multilayer perceptron is one such choice.

It is also possible to adapt the method to support real-valued, continuous class labels/targets. Using the above objective function and gradients, the set $C_i$ is redefined as a set of correct matches, that is, points whose labels are similar to that of point $i$, and the objective becomes a weighted sum over points based on label similarity. This also allows semi-supervised learning, as target classes need not be specified explicitly, only similarity between points.

## Relationship to Other Methods

There are other Mahalanobis distance metric learning methods for labelled data. Relevant Component Analysis (RCA) finds a distance metric, but assumes the classes have Gaussian distributions, whereas NCA makes no assumption about class distribution. Xing et al.'s method finds a transformation that minimises the pairwise distances between points of the same class; this assumes that all points of a class form a single, compact, connected set. Lowe's method is very similar to NCA, but further constrains the distance metric to be diagonal.

For low-rank, linear transformation of data, LDA is optimal if the classes are Gaussian with a common covariance, which will not be true in general. LDA also suffers from numerical instability when the training set is small but the dimensionality of the data is high. Newer variants of LDA improve this instability and also improve robustness to outliers.