Neighbourhood Components Analysis


== Introduction ==

Neighbourhood Components Analysis (NCA) is a method for learning a Mahalanobis distance measure for k-nearest neighbours (KNN). In particular, it finds a distance metric that maximises the leave-one-out (LOO) classification performance on the training set for a stochastic variant of KNN. NCA can also learn a low-dimensional linear embedding of labelled data, which is useful for data visualisation and for improving KNN classification speed.

== k-Nearest Neighbours ==

k-Nearest neighbours is a simple classification technique that determines a test point's label by looking at the labels of the <math>k</math> training points nearest to it. It is a surprisingly effective method with a non-linear decision surface, and it is non-parametric apart from the choice of <math>k</math>.
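
To make the classification rule concrete, here is a minimal KNN sketch in Python/NumPy. It is not from the paper; the function name and the default Euclidean distance are illustrative assumptions.

<pre>
# Minimal k-nearest-neighbours classifier (illustrative sketch).
# X_train is (n, d), y_train is (n,), x_test is (d,).
import numpy as np

def knn_classify(X_train, y_train, x_test, k=3):
    # Squared Euclidean distances from the test point to every training point.
    dists = np.sum((X_train - x_test) ** 2, axis=1)
    # Indices of the k nearest training points.
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels.
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
</pre>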

However, KNN suffers from two problems. First, it can be computationally expensive to classify points, since each test point must be compared to the entire training set. Second, there is the problem of choosing the distance metric that defines which points are "nearest".

== NCA and Stochastic Nearest Neighbours ==

NCA attacks both of these problems of KNN. It learns the distance metric that defines which points are nearest, and it can restrict this metric to be low rank, reducing the dimensionality of the data and thus reducing storage and search times.

NCA finds a matrix <math>A</math>, where <math>Q=A^TA</math> and the distance between two points is defined as:

<math>d(x,y) = (x - y)^TQ(x-y) = (Ax - Ay)^T(Ax - Ay)</math>
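
As a quick illustrative check (not from the paper), the quadratic form and the projected Euclidean form of this distance agree numerically; the matrix sizes below are arbitrary.

<pre>
# Sketch: the two forms of the learned distance agree.  A is any
# (p x d) matrix (random here, just for illustration); Q = A^T A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # low-rank example: p = 2, d = 5
Q = A.T @ A
x, y = rng.standard_normal(5), rng.standard_normal(5)

d_quadratic = (x - y) @ Q @ (x - y)          # (x - y)^T Q (x - y)
d_projected = np.sum((A @ x - A @ y) ** 2)   # ||Ax - Ay||^2
assert np.allclose(d_quadratic, d_projected)
</pre>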

The basic idea of NCA is to find the distance metric <math>A</math> that maximises KNN classification performance on test data. As test data are not available during training, the method uses leave-one-out (LOO) performance on the training set. However, the LOO error of KNN is a discontinuous function of <math>A</math>: an arbitrarily small change in <math>A</math> can change a point's set of nearest neighbours, and hence its predicted class, abruptly. Instead, NCA optimises an objective based on stochastic neighbour assignment, in which each point selects another point <math>j</math> as its neighbour (and takes on <math>j</math>'s class) with probability <math>p_{ij}</math>. This probability decreases as the distance between the points increases.
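
Concretely, the paper defines <math>p_{ij}</math> as a softmax over negative squared distances in the transformed space, <math>p_{ij} = \frac{\exp(-\|Ax_i - Ax_j\|^2)}{\sum_{k \neq i} \exp(-\|Ax_i - Ax_k\|^2)}</math>, with <math>p_{ii} = 0</math>. A minimal NumPy sketch of that computation (function and variable names are our own, not the paper's):

<pre>
# Sketch: stochastic neighbour probabilities p_ij as a softmax over
# negative squared distances in the projected space (with p_ii = 0).
import numpy as np

def neighbour_probs(A, X):
    """X is (n, d); A is (p, d).  Returns the (n, n) matrix of p_ij."""
    Z = X @ A.T                                                  # projected points, (n, p)
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2)    # ||Ax_i - Ax_j||^2
    np.fill_diagonal(sq, np.inf)                                 # exp(-inf) = 0, so p_ii = 0
    expd = np.exp(-sq)
    return expd / expd.sum(axis=1, keepdims=True)
</pre>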

NCA maximises the expected number of correctly classified points under this stochastic rule, giving the objective function:

<math>f(A) = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i</math>

where <math>C_i</math> is the set of points in the same class as point <math>i</math>, and <math>p_i = \sum_{j \in C_i} p_{ij}</math> is the probability that point <math>i</math> is correctly classified.
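
A short sketch of this objective, reusing <code>neighbour_probs</code> from the sketch above (names are illustrative, not from the paper):

<pre>
# Sketch: f(A) = sum_i sum_{j in C_i} p_ij, i.e. the expected number of
# correctly classified points under stochastic neighbour assignment.
import numpy as np

def nca_objective(A, X, y):
    P = neighbour_probs(A, X)             # p_ij matrix from the sketch above
    same_class = (y[:, None] == y[None, :])
    p_i = np.sum(P * same_class, axis=1)  # p_i = sum_{j in C_i} p_ij
    return np.sum(p_i)
</pre>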

The gradient of this objective is easily found:

<math>\frac{\partial f}{\partial A} = 2A \sum_i \left( p_i \sum_k p_{ik} x_{ik} x_{ik}^T - \sum_{j \in C_i} p_{ij} x_{ij} x_{ij}^T \right)</math>

where <math>x_{ij} = x_i - x_j</math>.
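
A direct, unoptimised translation of this gradient, again reusing <code>neighbour_probs</code> from the sketch above and written for clarity rather than speed:

<pre>
# Sketch: gradient of f with respect to A, following the formula above
# with x_ij = x_i - x_j.
import numpy as np

def nca_gradient(A, X, y):
    n, d = X.shape
    P = neighbour_probs(A, X)
    same_class = (y[:, None] == y[None, :])
    p_i = np.sum(P * same_class, axis=1)
    grad = np.zeros((d, d))
    for i in range(n):
        diffs = X[i] - X                                 # row k is x_ik = x_i - x_k
        outer = np.einsum('kd,ke->kde', diffs, diffs)    # x_ik x_ik^T for every k
        first = p_i[i] * np.einsum('k,kde->de', P[i], outer)
        second = np.einsum('k,kde->de', P[i] * same_class[i], outer)
        grad += first - second
    return 2 * A @ grad
</pre>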

NCA then finds <math>A</math> using a gradient-based optimiser. Note, however, that the objective is not convex, so the optimisation must contend with local optima. The authors comment that they never observed any overfitting with this method, and that performance never decreased as the training set size increased.
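
As a rough sketch of the training loop, one could simply ascend this gradient with a fixed step size; the paper uses standard gradient-based optimisers, so the learning rate and iteration count below are illustrative assumptions only.

<pre>
# Sketch: learning A by plain gradient ascent on f(A).
import numpy as np

def fit_nca(X, y, n_components, n_iters=100, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_components, X.shape[1])) * 0.1
    for _ in range(n_iters):
        A = A + lr * nca_gradient(A, X, y)   # ascend: f is being maximised
    return A
</pre>

Because the objective is non-convex, the result of such a loop depends on the random initialisation of <math>A</math>.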

It is also possible to use an objective based on KL divergence; the authors note that its performance is very similar to that of the objective above.

The scale of the <math>A</math> found by NCA, together with its relative row directions, can provide an estimate of the optimal number of neighbours to use for KNN. In stochastic NN, a larger-scale <math>A</math> tends to average over fewer neighbours, and vice versa.

== Low Rank Distance Metrics and Nonsquare Projection ==

== Experimental Results ==

== Extensions to Continuous Labels and Semi-Supervised Learning ==

== Relationship to Other Methods ==