Self-Taught Learning

From statwiki

Latest revision as of 09:45, 30 August 2017

Introduction and Motivation

Figure 1: Illustration of classification paradigms. An orange outline indicates that data is labeled. Diagram from <ref name="Raina"/>

Self-taught learning is a new paradigm in machine learning introduced by Stanford researchers in 2007 <ref name="Raina">Raina et al. (2007). "Self-taught Learning: Transfer Learning from Unlabeled Data". Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR.</ref>. The full paper can be found [http://www.stanford.edu/~hllee/icml07-selftaughtlearning.pdf here]. It builds on ideas from existing supervised, semi-supervised and transfer learning algorithms. The differences between these methods lie in how they use labeled and unlabeled data (Figure 1):

  • Supervised learning - All data is labeled and of the same type (shares the same class labels).
  • Semi-supervised learning - Only some of the data is labeled, but all of it is of the same type. One drawback is that acquiring unlabeled data of the same classes is often difficult and/or expensive.
  • Transfer learning - All data is labeled, but some is of another type (i.e. has class labels that do not apply to the data set that we wish to classify).

Self-taught learning combines the latter two ideas. It uses labeled data belonging to the desired classes and unlabeled data from other, somehow similar, classes. It is important to emphasize that the unlabeled data need not belong to the class labels we wish to assign, as long as it is related. This fact distinguishes it from semi-supervised learning. Since it uses unlabeled data from new classes, it can be thought of as semi-supervised transfer learning.

The additional unlabeled data can be used to learn a "higher-level feature representation on the inputs" <ref name="Raina"/>. After this representation is found, data can be analyzed and classified in this new space. In other words, the unlabeled data helps with dimension 'reduction' (in reality, the new space is often of higher dimension, but the representation is sparse) and labeled data helps with classification in the new representation.

The main advantage of this approach is that similar unlabeled data is often easier and cheaper to obtain than labeled data belonging to our classes. Additionally, data from other classes often shares characteristics with data from the desired classes that can be useful in supervised learning. For example, in the context of image classification, unlabeled images can be used to learn edges so that labeled images can be represented as linear combinations of these base images. Sound and text classification are other applications to which self-taught learning can be applied.

Self-Taught Learning Algorithm

The self-taught learning algorithm can be summarized as follows:

  1. Use unlabeled data to construct a new representation (typically a sparse high-dimensional one)
  2. Express labeled data in this new representation
  3. Use existing classification methods in this new space

There is much freedom as to how to accomplish each of the three steps. Existing techniques can be used, particularly in the classification step. However, there is also a lot of room for new techniques to be developed in this area that are tailored specifically to the idea of self-taught learning.

With that said, a specific algorithm will be discussed for each step to give some idea of how it works.
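As a concrete end-to-end illustration, the three steps can be sketched with scikit-learn, whose DictionaryLearning class implements a sparse-coding model similar to the one described below. This is a minimal sketch on synthetic data; the shapes, penalty values, and the choice of LinearSVC are illustrative assumptions, not the setup from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)

# Step 1: learn s bases from k unlabeled points in R^n.
X_unlabeled = rng.randn(200, 20)              # k = 200, n = 20
coder = DictionaryLearning(
    n_components=30,                          # s = 30 bases (s > n is allowed)
    alpha=1.0,                                # beta, the L1 sparsity penalty
    max_iter=50,
    transform_algorithm="lasso_lars",
    transform_alpha=1.0,
    random_state=0,
)
coder.fit(X_unlabeled)

# Step 2: express labeled data as sparse activations over the learned bases.
X_labeled = rng.randn(40, 20)                 # m = 40 labeled points
y = rng.randint(0, 2, size=40)                # synthetic binary labels
A = coder.transform(X_labeled)                # activation vectors, shape (40, 30)

# Step 3: run a standard classifier in the new representation.
clf = LinearSVC(dual=False).fit(A, y)
```

Any off-the-shelf classifier could replace LinearSVC in step 3; the representation, not the classifier, is the novel part.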

Problem Specification

Formally, suppose we have [math]\displaystyle{ \,m }[/math] labeled training points [math]\displaystyle{ \{(x_{\ell}^{(i)}, y^{(i)}), i = 1,\dots,m\} }[/math], where [math]\displaystyle{ x_{\ell}^{(i)} \in \mathbb{R}^n }[/math] and [math]\displaystyle{ \,y^{(i)} }[/math] is a class label. Further assume that the data are independently and identically drawn from some distribution. We also have a set of unlabeled data [math]\displaystyle{ \{x_u^{(i)},i=1,\dots,k\} }[/math], [math]\displaystyle{ x_u^{(i)} \in \mathbb{R}^n }[/math]. It is not required that this data come from the same distribution, which differentiates this method from semi-supervised learning; however, the data should somehow be relevant, as it is with transfer learning.
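In code, the problem data might be laid out as follows (random placeholder values; only the shapes matter):

```python
import numpy as np

rng = np.random.RandomState(0)
m, k, n = 50, 500, 20            # labeled points, unlabeled points, input dimension

X_labeled = rng.randn(m, n)      # x_l^(i), one point per row
y = rng.randint(0, 4, size=m)    # class labels y^(i)
X_unlabeled = rng.randn(k, n)    # x_u^(i): need not come from the same
                                 # distribution, only from relevant data
```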

Constructing Bases Using Unlabeled Data

There are numerous techniques that can potentially be used to construct a new representation of the data. A particular one, the sparse-coding algorithm by Olshausen and Field <ref name="OF1996">Olshausen, B. A., & Field, D. J. (1996) "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature, 381, 607-609.</ref>, will be discussed here. This method aims to solve the optimization problem

[math]\displaystyle{ \min_{\textbf{b},\textbf{a}} \sum_{i=1}^k || x_u^{(i)} - \sum_{j=1}^s a_j^{(i)}b_j||^2_2 + \beta||a^{(i)}||_1 \,\,\,\,\,(1) }[/math]

subject to the constraints [math]\displaystyle{ ||b_j||_2 \leq 1 }[/math] for all [math]\displaystyle{ j=1,\dots,s }[/math]. Note that:

  • [math]\displaystyle{ \textbf{b} = \{b_1,\dots,b_s\} }[/math] are 'basis vectors', with [math]\displaystyle{ b_j \in \mathbb{R}^n }[/math]
  • [math]\displaystyle{ \textbf{a} = \{a^{(1)},\dots, a^{(k)}\} }[/math] are 'activations' with each [math]\displaystyle{ a^{(i)} \in \mathbb{R}^s }[/math]
  • [math]\displaystyle{ \,s }[/math] is the number of bases that are used in the new representation. [math]\displaystyle{ \,s }[/math] can be larger than [math]\displaystyle{ \,n }[/math]

The first term in the cost function has the purpose of creating [math]\displaystyle{ \,b_j }[/math] and [math]\displaystyle{ a_j^{(i)} }[/math] such that each unlabeled point can be approximately expressed as a linear combination of these bases and activations (weights). The [math]\displaystyle{ \,\beta||a^{(i)}||_1 }[/math] term acts as a penalty to ensure that the activation terms are sparse<ref>Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B., 58, 267-288.</ref><ref>Ng, A.Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance. ICML.</ref>. Using penalties other than this [math]\displaystyle{ L_1 }[/math] regularization leads to non-sparse activation terms which, in turn, reduce the learning performance.

This leads to two sub-problems, each of which is convex in either [math]\displaystyle{ \,\textbf{a} }[/math] or [math]\displaystyle{ \,\textbf{b} }[/math] when the other one is constant. Since each of these cases can be solved efficiently, a solution to this optimization problem can be found by alternating between optimizing [math]\displaystyle{ \,\textbf{a} }[/math] and [math]\displaystyle{ \,\textbf{b} }[/math], while fixing the other.
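A bare-bones version of this alternating scheme can be written directly in NumPy: ISTA (iterative soft-thresholding) solves the convex L1 subproblem in the activations, and a least-squares update followed by renormalization handles the subproblem in the bases. The step sizes and iteration counts here are illustrative; the paper's actual solver is more sophisticated.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def _activations(X, B, beta, n_ista, A0=None):
    # ISTA for the L1-regularized least squares in A, with B held fixed.
    A = np.zeros((X.shape[0], B.shape[0])) if A0 is None else A0
    L = 2.0 * np.linalg.norm(B @ B.T, 2) + 1e-12  # Lipschitz const. of the gradient
    for _ in range(n_ista):
        grad = 2.0 * (A @ B - X) @ B.T
        A = soft_threshold(A - grad / L, beta / L)
    return A

def sparse_coding(X, s, beta, n_outer=10, n_ista=30):
    """Approximately minimize ||X - A B||_F^2 + beta * sum|A| over the
    activations A (k x s) and bases B (s x n), with each basis b_j
    (a row of B) constrained to ||b_j||_2 <= 1."""
    rng = np.random.RandomState(0)
    k, n = X.shape
    B = rng.randn(s, n)
    B /= np.maximum(np.linalg.norm(B, axis=1, keepdims=True), 1.0)
    A = _activations(X, B, beta, n_ista)
    for _ in range(n_outer):
        # b-step: unconstrained least squares in B, then rescale any basis
        # with norm > 1 back onto the constraint set ||b_j||_2 <= 1.
        B = np.linalg.lstsq(A, X, rcond=None)[0]
        B /= np.maximum(np.linalg.norm(B, axis=1, keepdims=True), 1.0)
        # a-step: re-solve for the activations under the updated bases.
        A = _activations(X, B, beta, n_ista, A0=A)
    return A, B

X = np.random.RandomState(1).randn(30, 8)
A, B = sparse_coding(X, s=12, beta=0.1)
```

The soft-thresholding step is what produces exact zeros in the activations; an L2 penalty in its place would shrink them without ever zeroing them out.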

Projecting Labeled Data in New Bases and Classification

Figure 2: This is an example of a patch of an image [math]\displaystyle{ x }[/math] represented as a combination of three (of many) basis vectors. Diagram from <ref name="Raina"/>

Once the optimization problem is solved, the set of bases can be used on labeled data (recall that the bases were constructed using unlabeled data). That is, given the bases [math]\displaystyle{ b_1,\dots,b_s }[/math], for each [math]\displaystyle{ x_{\ell}^{(i)} }[/math] we need to find the corresponding weight vector [math]\displaystyle{ \hat{a}(x_{\ell}^{(i)}) }[/math]. This is achieved as follows:

[math]\displaystyle{ \hat{a}(x_{\ell}^{(i)}) = \arg\min_{a^{(i)}} || x_{\ell}^{(i)} - \sum_{j=1}^s a_j^{(i)}b_j||^2_2 + \beta||a^{(i)}||_1 \,\,\,\,\,(2) }[/math]

This is also a convex problem with an efficient solution, yielding an approximate representation of our point in the constructed bases. Note that due to the second term, the activation vector is sparse. Figure 2 shows a sample decomposition of images into images that play the role of bases.

Once the data is in the new representation, standard supervised methods can be used to classify the data.
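Since the bases are fixed at this point, solving (2) for one labeled point is an ordinary lasso regression of that point on the bases, so any off-the-shelf L1 solver applies. A sketch with scikit-learn follows; the bases here are random stand-ins, and the alpha rescaling reflects sklearn's objective, which divides the squared error by twice the sample count:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n, s = 16, 24
B = rng.randn(s, n)                        # stand-in learned bases b_1..b_s (rows)
B /= np.linalg.norm(B, axis=1, keepdims=True)
x = rng.randn(n)                           # one labeled point x_l^(i)

# sklearn's Lasso minimizes (1/(2n))||x - B^T a||^2 + alpha*||a||_1,
# so alpha = beta/(2n) matches the penalty beta in equation (2).
beta = 0.5
lasso = Lasso(alpha=beta / (2 * n), fit_intercept=False, max_iter=10000)
lasso.fit(B.T, x)                          # design matrix columns are the bases
a_hat = lasso.coef_                        # sparse activation vector of length s
```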

A Classifier Specific to Sparse-Coding

In the algorithm described above, once the data is represented in the new bases, typical classifiers can be used.

However, it might be advantageous to create a classifier specific to sparse-coding with hopes of improving accuracy. Specifically, there is a natural kernel that can be used to measure similarity given that the data lies in a space constructed with sparse-coding.

Namely, there are two assumptions that can be made:

  • Assume that [math]\displaystyle{ \,x }[/math] has Gaussian noise [math]\displaystyle{ \,\eta }[/math] (i.e. [math]\displaystyle{ \,\eta \sim N(0, \sigma^2) }[/math] and [math]\displaystyle{ P(x = \sum_j a_j b_j + \eta|b,a) \propto \exp(-||\eta||^2_2/2\sigma^2) }[/math])
  • Impose a Laplacian prior on [math]\displaystyle{ \,\mathbf{a} }[/math] (typical for sparseness) with [math]\displaystyle{ P(a) \propto \exp(-\beta \sum_j |a_j|) }[/math]; then expression (1) aims to learn the bases [math]\displaystyle{ \,\mathbf{b} }[/math] of a linear generative model of [math]\displaystyle{ \,x }[/math]


Therefore, a Fisher kernel can be used to measure similarity in the new representation<ref name="Jaakkola">Jaakkola, T., & Haussler, D. (1998). Exploiting generative models in discriminative classifiers. NIPS.</ref>. The algorithm is as follows:

  • Use (1) and unlabeled data to learn [math]\displaystyle{ \,\mathbf{b} }[/math]
  • Compute features [math]\displaystyle{ \hat{\mathbf{a}}(x) }[/math] by solving (2)
  • Compute kernels, [math]\displaystyle{ K(x^{(s)},x^{(t)}) = \left(\hat{a}(x^{(s)})^T\hat{a}(x^{(t)})\right)\left({r^{(s)}}^T r^{(t)}\right) }[/math] where [math]\displaystyle{ r^{(s)} = x^{(s)} - \sum_j \hat{a}_j b_j }[/math], the residual from our estimate in the new representation.
  • Use a kernel-based classifier such as an SVM with this new kernel.
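The kernel computation and the final classification step can be sketched as follows. Here the activations are random stand-ins rather than actual solutions of (2), and the synthetic labels exist only to show the precomputed-kernel SVM call:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n, s, m = 10, 15, 40
B = rng.randn(s, n)                        # stand-in bases b_1..b_s (rows)
B /= np.linalg.norm(B, axis=1, keepdims=True)

X = rng.randn(m, n)                        # labeled points, one per row
y = rng.randint(0, 2, size=m)              # synthetic labels
A = rng.laplace(scale=0.3, size=(m, s))    # stand-in activations a-hat(x)

# Residuals r^(i) = x^(i) - sum_j a_j b_j, one per row.
R = X - A @ B
# Kernel matrix: K[s,t] = (a-hat(x^(s))^T a-hat(x^(t))) * (r^(s)^T r^(t)).
# An elementwise product of two Gram matrices is itself a valid kernel.
K = (A @ A.T) * (R @ R.T)

clf = SVC(kernel="precomputed").fit(K, y)
pred = clf.predict(K)
```

Because the kernel is supplied as a precomputed Gram matrix, predicting on new points requires computing their kernel values against the training points in the same way.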

Results and Discussion

This method of learning applies best to classification of images, text, and sound, so most discussion will focus on classification in this context.

A natural first comparison is between this method and using PCA on the unlabeled data to construct a new basis. PCA differs in two major ways: it looks for linear structure, and it requires that the new bases be orthogonal (thus limiting the number of dimensions in the new space to, at most, the number of dimensions in the original data). So it is hard for PCA to form bases that represent the basic patterns that underlie the data; this is not a problem for sparse-coding.

In each of the experiments below there are three stages of processing:

  • Raw data with no processing
  • Data that have been reduced using PCA
  • Data in the new representation after sparse-coding was applied to the PCA data.

Additionally, in some data sets, it is helpful to use the original features in addition to the new features.

In each case, both SVM and Gaussian Discriminant Analysis (GDA) were used and the best result was taken.

Recognizing Font Characters

The first experiment examined using handwritten digits and handwritten letters to classify font characters. Table 1 shows the results for digits, and Table 2 shows the results for handwritten letters.

Table 1

{| class="wikitable"
! Training Size
! Raw
! PCA
! Sparse-Coding
|-
| align="center" | 100
| align="center" | 39.8%
| align="center" | 25.3%
| align="center" | 39.7%
|-
| align="center" | 500
| align="center" | 54.8%
| align="center" | 54.8%
| align="center" | 58.5%
|-
| align="center" | 1000
| align="center" | 61.9%
| align="center" | 64.5%
| align="center" | 65.3%
|}

Table 2

{| class="wikitable"
! Training Size
! Raw
! PCA
! Sparse-Coding
! Raw and Sparse-Coding
|-
| align="center" | 100
| align="center" | 8.2%
| align="center" | 5.7%
| align="center" | 7.0%
| align="center" | 9.2%
|-
| align="center" | 500
| align="center" | 17.9%
| align="center" | 14.5%
| align="center" | 16.6%
| align="center" | 20.2%
|-
| align="center" | 1000
| align="center" | 25.6%
| align="center" | 23.7%
| align="center" | 23.2%
| align="center" | 28.3%
|}

References

<references />