Self-Taught Learning

Introduction and Intuitive Idea

Illustration of classification paradigms. An orange outline indicates that the data is labeled.

Self-taught learning is a new paradigm in machine learning introduced by Stanford researchers in 2007 REF. It builds on ideas from existing supervised, semi-supervised, and transfer learning algorithms. These methods differ in their use of labeled and unlabeled data:

  • Supervised learning - All data is labeled and of the same type (shares the same class labels).
  • Semi-supervised learning - Only some of the data is labeled, but it is all of the same type.
  • Transfer learning - All data is labeled, but some is of another type (i.e., it has class labels that do not apply to the data set that we wish to classify).

Self-taught learning combines the latter two ideas. It uses labeled data belonging to the desired classes and unlabeled data from other, somehow similar, classes. It is important to emphasize that the unlabeled data need not belong to the class labels we wish to assign, as long as it is somehow related to them. Because of this use of unlabeled and related data, self-taught learning can be thought of as semi-supervised transfer learning.

The additional unlabeled data can be used to learn a "higher-level feature representation on the inputs" REF. After this representation is found, data can be analyzed and classified in this new space.

The main advantage of this approach is that unlabeled similar data is often easier and cheaper to obtain than labeled data belonging to our classes. Additionally, it often shares characteristics with data belonging to the desired classes that can be useful in supervised learning. For example, in the context of image classification, unlabeled images can be used to learn edges so that labeled images can be constructed as a linear combination of these base images.

Self-Taught Learning Algorithm

The self-taught learning algorithm can be summarized as follows:

  1. Use unlabeled data to construct a new representation (typically a sparse high-dimensional one)
  2. Express labeled data in this new representation
  3. Use existing classification methods in this new space

There is much freedom as to how to accomplish each of the three steps, so there is a lot of room for new techniques to be developed in this area. However, a specific algorithm will be discussed for each step.
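
To make these steps concrete, here is a minimal sketch of the whole pipeline, assuming scikit-learn is available and using its DictionaryLearning as a stand-in for the sparse-coding step discussed below (all data and parameter values are illustrative):

    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.svm import LinearSVC

    # Toy data: k unlabeled points and m labeled points, all in R^n.
    rng = np.random.default_rng(0)
    n, k, m, s = 20, 500, 100, 40            # note s > n: an overcomplete basis
    X_u = rng.normal(size=(k, n))            # unlabeled data (related, other classes)
    X_l = rng.normal(size=(m, n))            # labeled training points
    y_l = rng.integers(0, 2, size=m)         # class labels

    # Step 1: use the unlabeled data to learn s bases.
    coder = DictionaryLearning(n_components=s, alpha=1.0,
                               transform_algorithm='lasso_lars')
    coder.fit(X_u)

    # Step 2: express the labeled data as sparse activations over those bases.
    A_l = coder.transform(X_l)

    # Step 3: use a standard classifier in the new space.
    clf = LinearSVC().fit(A_l, y_l)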

Theory

Problem Specification

Formally, suppose we have [math]\displaystyle{ \,m }[/math] labeled training points [math]\displaystyle{ \{(x_{\ell}^{(i)}, y^{(i)}), i = 1,\dots,m\} }[/math], where [math]\displaystyle{ x_{\ell}^{(i)} \in \mathbb{R}^n }[/math] and [math]\displaystyle{ \,y^{(i)} }[/math] is a class label. Further assume that the data are drawn independently and identically from some distribution. In addition, we have a set of unlabeled data [math]\displaystyle{ \{x_u^{(i)},i=1,\dots,k\} }[/math], [math]\displaystyle{ x_u^{(i)} \in \mathbb{R}^n }[/math]. It is not required that these data come from the same distribution, which differentiates this method from semi-supervised learning; however, the data should be somehow relevant, as in transfer learning.

Constructing Bases Using Unlabeled Data

There are numerous techniques that can potentially be used to construct a new representation of the data. A particular one, the sparse-coding algorithm by Olshausen and Field <ref name="OF1996">B. A. Olshausen and D.J. Field. (1996) "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature, 381:607-609.</ref>, will be discussed here. This method aims to solve the optimization problem

[math]\displaystyle{ \min_{\textbf{b},\textbf{a}} \sum_{i=1}^k \left( \left|\left| x_u^{(i)} - \sum_{j=1}^s a_j^{(i)}b_j \right|\right|^2_2 + \beta||a^{(i)}||_1 \right) \,\,\,\,\,(1) }[/math]

subject to the constraints [math]\displaystyle{ ||b_j||_2 \leq 1 }[/math] for all [math]\displaystyle{ j=1,\dots,s }[/math]. Note that:

  • [math]\displaystyle{ \textbf{b} = \{b_1,\dots,b_s\} }[/math] are 'basis vectors', with [math]\displaystyle{ b_j \in \mathbb{R}^n }[/math]
  • [math]\displaystyle{ \textbf{a} = \{a^{(1)},\dots, a^{(k)}\} }[/math] are 'activations' with each [math]\displaystyle{ a^{(i)} \in \mathbb{R}^s }[/math]
  • [math]\displaystyle{ \,s }[/math] is the number of bases that are used in the new representation. [math]\displaystyle{ \,s }[/math] can be larger than [math]\displaystyle{ \,n }[/math]

The first term in the cost function has the purpose of creating [math]\displaystyle{ \,b_j }[/math] and [math]\displaystyle{ a_j^{(i)} }[/math] such that each unlabeled point can be expressed as a linear combination of these bases and activations (weights). The [math]\displaystyle{ \,\beta||a^{(i)}||_1 }[/math] term acts as a penalty to ensure that the activation terms are sparse REF. Intuitively, this means that each data point is represented using only a small number of the available bases.

This problem is convex in either [math]\displaystyle{ \,\textbf{a} }[/math] or [math]\displaystyle{ \,\textbf{b} }[/math] when the other is held constant. Since each of these cases can be solved efficiently, a solution to this optimization problem can be found by alternating between optimizing [math]\displaystyle{ \,\textbf{a} }[/math] and [math]\displaystyle{ \,\textbf{b} }[/math] while fixing the other.
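
A minimal NumPy sketch of this alternating scheme, using ISTA (iterative soft-thresholding) for the activation step and projected gradient descent for the basis step; the authors use more specialized convex solvers, so this is illustrative only:

    import numpy as np

    def soft_threshold(Z, t):
        # Proximal operator of t * ||.||_1: shrink each entry toward zero.
        return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

    def sparse_coding(X, s, beta, n_outer=20, n_inner=50):
        # X is k x n (one unlabeled point per row); returns bases B (s x n,
        # one basis per row, with ||b_j||_2 <= 1) and activations A (k x s).
        k, n = X.shape
        rng = np.random.default_rng(0)
        B = rng.normal(size=(s, n))
        B /= np.maximum(np.linalg.norm(B, axis=1, keepdims=True), 1.0)
        A = np.zeros((k, s))
        for _ in range(n_outer):
            # a-step: B fixed, each row of A solves a lasso problem (ISTA).
            L = 2 * np.linalg.norm(B @ B.T, 2) + 1e-8   # Lipschitz constant
            for _ in range(n_inner):
                A = soft_threshold(A - 2 * (A @ B - X) @ B.T / L, beta / L)
            # b-step: A fixed, least squares in B subject to ||b_j||_2 <= 1,
            # solved by gradient descent with projection onto the unit balls.
            L = 2 * np.linalg.norm(A.T @ A, 2) + 1e-8
            for _ in range(n_inner):
                B -= 2 * A.T @ (A @ B - X) / L
                B /= np.maximum(np.linalg.norm(B, axis=1, keepdims=True), 1.0)
        return B, A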

Projecting Labeled Data in New Bases and Classification

Once the optimization problem is solved, the set of bases can be used on labeled data (recall that the bases were constructed using only unlabeled data). That is, given the bases [math]\displaystyle{ b_1,\dots,b_s }[/math], for each [math]\displaystyle{ x_{\ell}^{(i)} }[/math] we need to find the corresponding activations [math]\displaystyle{ \hat{a}(x_{\ell}^{(i)}) }[/math]. This is achieved as follows:

[math]\displaystyle{ \hat{a}(x_{\ell}^{(i)}) = \arg\min_{a^{(i)}} \left|\left| x_{\ell}^{(i)} - \sum_{j=1}^s a_j^{(i)}b_j \right|\right|^2_2 + \beta||a^{(i)}||_1 \,\,\,\,\,(2) }[/math]

This is also a convex problem with an efficient solution, yielding an approximate representation of our point in the constructed bases. Note that due to the second term, the activation vector is sparse.
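
Since (2) is simply the activation step of the previous problem with the bases held fixed, it can be solved with the same lasso machinery; a self-contained ISTA sketch (illustrative, not the authors' solver):

    import numpy as np

    def encode(X, B, beta, n_iter=200):
        # Solve (2) for each row of X: L1-penalized least squares with the
        # learned bases B (s x n) held fixed, via iterative soft-thresholding.
        A = np.zeros((X.shape[0], B.shape[0]))
        L = 2 * np.linalg.norm(B @ B.T, 2) + 1e-8
        for _ in range(n_iter):
            Z = A - 2 * (A @ B - X) @ B.T / L
            A = np.sign(Z) * np.maximum(np.abs(Z) - beta / L, 0.0)
        return A

    # A_l = encode(X_labeled, B, beta=1.0) gives the sparse features on which
    # any standard classifier can then be trained.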

Once the data is in the new representation, standard supervised methods can be used to classify the data.

A Classifier Specific to Sparse-Coding

In the algorithm described above, once the data is represented in the new bases typical classifiers can be used. However, it might be advantageous to create a classifier specific to sparse-coding with hopes of improving accuracy. Specifically, there is a natural kernel that can be used to measure similarity given that the data lies in a space constructed with sparse-coding.

Namely, if we assume that [math]\displaystyle{ \,x }[/math] has Gaussian noise [math]\displaystyle{ \,\eta }[/math] (i.e. [math]\displaystyle{ \eta \sim N(0, \sigma^2 I) }[/math] and [math]\displaystyle{ P(x = \sum_j a_j b_j + \eta|b,a) \propto \exp(-||\eta||^2_2/2\sigma^2) }[/math]) and impose a Laplacian prior on [math]\displaystyle{ \,\mathbf{a} }[/math] (typical for encouraging sparseness) with [math]\displaystyle{ P(a) \propto \exp(-\beta \sum_j |a_j|) }[/math] REF, then expression (1) corresponds to maximum a posteriori estimation in a linear generative model whose aim is to learn [math]\displaystyle{ \,\mathbf{b} }[/math].

Therefore, a Fisher kernel can be used to measure similarity in the new representation REF. The algorithm is as follows:

  • Use (1) and the unlabeled data to learn [math]\displaystyle{ \,\mathbf{b} }[/math]
  • Compute features [math]\displaystyle{ \hat{\mathbf{a}}(x) }[/math] by solving (2)
  • Train a standard kernel classifier (e.g., an SVM) using the Fisher kernel derived from the generative model
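
Under this model, the Fisher score of a point with respect to the bases, evaluated at the MAP activations, factorizes as the outer product of the reconstruction residual [math]\displaystyle{ r(x) = x - \sum_j \hat{a}_j(x) b_j }[/math] and the activations [math]\displaystyle{ \hat{\mathbf{a}}(x) }[/math], so the kernel takes the form [math]\displaystyle{ K(x,z) \propto (\hat{a}(x)^T\hat{a}(z))\,(r(x)^T r(z)) }[/math]. A sketch under that assumption, reusing the encode helper sketched above:

    import numpy as np

    def fisher_kernel(X1, X2, B, beta):
        # MAP activations and reconstruction residuals under the model above;
        # encode() is the ISTA helper from the previous section.
        A1, A2 = encode(X1, B, beta), encode(X2, B, beta)
        R1, R2 = X1 - A1 @ B, X2 - A2 @ B
        # The Fisher score factorizes into residual x activations, so the
        # kernel is an entrywise product of two Gram matrices (the constant
        # 1/sigma^4 is absorbed by the classifier).
        return (A1 @ A2.T) * (R1 @ R2.T)

    # Usage with a kernel SVM on the precomputed Gram matrix:
    # K_train = fisher_kernel(X_l, X_l, B, beta=1.0)
    # clf = sklearn.svm.SVC(kernel='precomputed').fit(K_train, y_l)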

Results and Discussion

This method of learning applies best to classification of images, text, and sound, so most discussion will focus on classification in this context.

One natural first comparison is to set this method against using PCA on the unlabeled data to construct new bases. PCA differs in two major ways: it can only extract linear structure, and it requires the new bases to be orthogonal (thus limiting the number of dimensions in the new space to, at most, the number of dimensions in the original data). This makes it hard for PCA to form bases that represent the basic patterns underlying the data; sparse coding does not suffer from this limitation.
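
The dimensionality limitation is easy to see in code: PCA can produce at most min(k, n) components, while the sparse-coding step above can learn an overcomplete basis with s > n. A small illustration, assuming scikit-learn:

    import numpy as np
    from sklearn.decomposition import PCA

    X_u = np.random.default_rng(0).normal(size=(500, 20))   # k = 500, n = 20
    PCA(n_components=20).fit(X_u)        # fine: at most n = 20 orthogonal bases
    try:
        PCA(n_components=40).fit(X_u)    # overcomplete request
    except ValueError as err:
        print(err)                       # PCA rejects more components than dimensions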

Experimental Results

References

<references />