Supervised Dictionary Learning

This paper proposes a novel discriminative formulation for sparse representation of images using learned dictionaries.

Introduction

Sparse models originated in two different communities under two different names: first among neuroscientists, most notably through the seminal work of Olshausen and Field <ref name="Olshausen1996">B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, vol. 381, pp. 607-609, 1996.</ref> on sparse coding (SC), and second among researchers in signal processing as independent component analysis (ICA) (see <ref name="ICABook">A. Hyvärinen, J. Karhunen, and E. Oja. Independent component analysis. New York: John Wiley and Sons, 2001.</ref> for a comprehensive overview of ICA). Although SC and ICA arose from two different problems (the former as a model of simple cells in the visual cortex and the latter as a means of decomposing mixed signals into independent sources), they eventually converged into similar techniques with somewhat different descriptions.
On the other hand, representation of a signal using a learned dictionary instead of predefined operators (such as wavelets in signal and image processing or local binary patterns (LBP) in texture classification) has led to state-of-the-art results in various applications such as denoising <ref name="Elad2006">M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. IP, vol. 54, no. 12, 2006.</ref> and texture classification <ref name="VZ2009">M. Varma and A. Zisserman. A statistical approach to material classification using image patch exemplars. IEEE Trans. PAMI, vol. 31, no. 11, pp. 2032-2047, 2009.</ref>.
It is well known that sparsity captures higher-order statistics of the data. For example, comparing PCA and ICA, PCA can only capture up to second-order statistics of the data and hence is appropriate for Gaussian models, whereas ICA can capture higher-order statistics. Whitening the data is a common preprocessing step in ICA, and ICA is hence appropriate for super-Gaussian models (such as data with Laplacian distributions) <ref name="ICABook"/>.
Previous work in the literature on sparse representation has used either predefined (fixed) operators or learned dictionaries in reconstructive, discriminative, or generative models, in various applications such as signal and face recognition <ref>K. Huang and S. Aviyente. Sparse representation for signal classification. In NIPS, 2006.</ref><ref>J. Wright, A.Y. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Trans. PAMI, vol. 31, no. 2, pp. 210-227, 2009.</ref><ref>R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled data. In ICML, 2007.</ref><ref>J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Learning discriminative dictionaries for local image analysis. In CVPR, 2008.</ref><ref>M. Ranzato and M. Szummer. Semi-supervised learning of compact document representations with deep networks. In ICML, 2008.</ref>.
In this paper, the authors extend these approaches by proposing a framework for simultaneously learning a single shared dictionary and sparse models (one per class) in a mixed generative and discriminative formulation. Although such joint generative/discriminative frameworks have been reported for probabilistic approaches and neural networks, they had not previously been applied to sparse dictionary learning.

Sparse Representation and Dictionary Learning

Sparse representation can be exploited in two different ways: first, by representing a signal as a linear combination of predefined bases such as wavelets <ref name="MallatSparseBook">S. Mallat. A wavelet tour of signal processing, the sparse way. Burlington: Academic Press, 3rd ed., 2009.</ref>; second, by learning a dictionary of primitive elements from the data and decomposing the signal into these primitive elements <ref name="Julesz1981">B. Julesz. Textons, the elements of texture perception, and their interactions. Nature, vol. 290, pp. 91-97, 1981.</ref><ref name="Olshausen1996"/><ref name="VZ2009"/>. The latter approach involves two steps: learning the dictionary and computing the (sparse) coefficients that represent the signal in terms of the dictionary elements.
Given a fixed dictionary [math]\displaystyle{ D=[d_{1},...,d_{k}]\in \mathbb{R}^{n\times k} }[/math], a signal [math]\displaystyle{ \mathbf{x}\in \mathbb{R}^{n} }[/math] can be reconstructed from sparse coefficients [math]\displaystyle{ \mathbf{\alpha} }[/math] by solving

[math]\displaystyle{ \min_{\mathbf{\alpha}\in\mathbb{R}^{k}}\left \| \mathbf{x}-D\mathbf{\alpha} \right \|_{2}^{2}+\lambda_{1}\left \| \mathbf{\alpha} \right \|_{1}, \;\;\;(1) }[/math]


where the [math]\displaystyle{ \ell_{1} }[/math] penalty yields a sparse solution for the coefficients [math]\displaystyle{ \mathbf{\alpha} }[/math].
In (1), the dictionary is fixed and the goal is to find sparse coefficients [math]\displaystyle{ \mathbf{\alpha} }[/math] such that [math]\displaystyle{ \mathbf{x} }[/math] can be reconstructed from the dictionary atoms. The dictionary itself can be learned from m training signals [math]\displaystyle{ (\mathbf{x}_{i})_{i=1}^{m} }[/math] in [math]\displaystyle{ \mathbb{R}^{n} }[/math]; hence, (1) can be modified as follows:

[math]\displaystyle{ \min_{D,\mathbf{\alpha}}\sum_{i=1}^{m}\left \| \mathbf{x}_{i}-D\mathbf{\alpha}_{i} \right \|_{2}^{2}+\lambda_{1}\left \| \mathbf{\alpha}_{i} \right \|_{1}. \;\;\;(2) }[/math]


Since the reconstruction errors [math]\displaystyle{ \left \| \mathbf{x}_{i}-D\mathbf{\alpha}_{i} \right \|_{2}^{2} }[/math] in (2) are invariant to scaling the dictionary [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] by a scalar and the coefficients [math]\displaystyle{ \mathbf{\alpha}_{i} }[/math] by its inverse, the [math]\displaystyle{ \ell_{2} }[/math] norms of the columns of [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] are constrained <ref name="Elad2006"/>. This purely reconstructive framework is referred to as REC in this paper.
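To make (1) and (2) concrete, the following is a minimal numpy sketch (not the authors' implementation): the sparse coding step is solved with ISTA, a proximal gradient method assumed here purely for illustration, and alternated with a least-squares dictionary update under the unit-norm column constraint; function names and iteration counts are arbitrary.

<pre>
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam1, n_iter=200):
    # Approximately solve (1): min_a ||x - D a||_2^2 + lam1 ||a||_1 via ISTA.
    a = np.zeros(D.shape[1])
    L = 2.0 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam1 / L)
    return a

def learn_dictionary(X, k, lam1, n_outer=20):
    # Approximately solve (2) by alternating sparse coding and a least-squares
    # dictionary update; X holds one training signal per column, and the columns
    # of D are re-normalized to respect the unit-norm constraint.
    n, m = X.shape
    D = np.random.randn(n, k)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        A = np.column_stack([sparse_code(X[:, i], D, lam1) for i in range(m)])
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-8 * np.eye(k))
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D
</pre>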

Supervised Dictionary Learning

In this paper, a binary classification task is considered first, and the proposed approach is then extended to multiclass problems. In the two-class problem, given signals [math]\displaystyle{ (\mathbf{x}_{i})_{i=1}^{m} }[/math] and their corresponding binary labels [math]\displaystyle{ (y_{i}\in\left \{ -1,+1 \right \})_{i=1}^{m} }[/math], the goal is to learn a dictionary [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] adapted to the classification task together with a function [math]\displaystyle{ f }[/math] that is positive for any signal in class +1 and negative otherwise. Both linear and bilinear models are considered in this paper. In the linear (L) model,

[math]\displaystyle{ f(\mathbf{x},\mathbf{\alpha},\mathbf{\theta})=\mathbf{w}^{T}\mathbf{\alpha}+b, \;\; \mathbf{\theta}=\left \{ \mathbf{w}\in\mathbb{R}^{k}, b\in\mathbb{R} \right \}, \;\;\;(3) }[/math]


whereas in the bilinear (BL) model

[math]\displaystyle{ f(\mathbf{x},\mathbf{\alpha},\mathbf{\theta})=\mathbf{x}^{T}W\mathbf{\alpha}+b, \;\; \mathbf{\theta}=\left \{ W\in\mathbb{R}^{n\times k}, b\in\mathbb{R} \right \}. \;\;\;(4) }[/math]
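For concreteness, a small sketch of the two decision functions; the parameter names (w, W, b) follow the shapes implied above but are otherwise illustrative.

<pre>
import numpy as np

def f_linear(x, alpha, w, b):
    # Linear model (3): f(x, alpha, theta) = w^T alpha + b, with theta = (w in R^k, b in R).
    return w @ alpha + b

def f_bilinear(x, alpha, W, b):
    # Bilinear model (4): f(x, alpha, theta) = x^T W alpha + b, with theta = (W in R^{n x k}, b in R).
    return x @ W @ alpha + b
</pre>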


Supervised dictionary learning (SDL) can be carried out with three different approaches, i.e., reconstructive, generative, and discriminative.

Reconstructive Approach

In the reconstructive (REC) approach <ref name="Elad2006"/>, the dictionary [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] and the coefficients [math]\displaystyle{ \mathbf{\alpha}_{i} }[/math] are learned using (2). The parameters [math]\displaystyle{ \mathbf{\theta} }[/math] are learned afterwards by solving

[math]\displaystyle{ \min_{\mathbf{\theta}}\sum_{i=1}^{m}\mathcal{C}(y_{i}f(\mathbf{x}_{i},\mathbf{\alpha}_{i},\mathbf{\theta}))+\lambda_{2}\left \| \mathbf{\theta} \right \|_{2}^{2}, \;\;\;(5) }[/math]


where [math]\displaystyle{ \mathcal{C} }[/math] is the logistic loss function, i.e., [math]\displaystyle{ \mathcal{C}(x)=\log(1+e^{-x}) }[/math], and [math]\displaystyle{ \lambda_{2} }[/math] is a regularization parameter to prevent overfitting.
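A minimal sketch of step (5) for the linear model (3): with the sparse codes fixed by (2), theta = (w, b) is fit by l2-regularized logistic regression. Plain gradient descent, the learning rate, and the iteration count are assumptions made purely for illustration.

<pre>
import numpy as np

def fit_theta(A, y, lam2, lr=0.1, n_iter=500):
    # A: (k, m) matrix of sparse codes alpha_i (one per column);
    # y: (m,) labels in {-1, +1}. Minimizes (5) for the linear model by gradient descent.
    k, m = A.shape
    w, b = np.zeros(k), 0.0
    for _ in range(n_iter):
        margins = y * (A.T @ w + b)               # y_i * f(x_i, alpha_i, theta)
        s = -y / (1.0 + np.exp(margins))          # derivative of the logistic loss wrt f
        grad_w = A @ s + 2.0 * lam2 * w
        grad_b = s.sum()
        w -= lr * grad_w / m                      # the 1/m factor only rescales the step size
        b -= lr * grad_b / m
    return w, b
</pre>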

Generative Approach

Supervised dictionary learning with the generative approach (SDL-G) learns [math]\displaystyle{ \mathbf{\mathit{D}} }[/math], [math]\displaystyle{ \mathbf{\theta} }[/math], and [math]\displaystyle{ \mathbf{\alpha} }[/math] jointly by solving

[math]\displaystyle{ \min_{D,\mathbf{\theta},\mathbf{\alpha}}\sum_{i=1}^{m}\left ( \mathcal{C}(y_{i}f(\mathbf{x}_{i},\mathbf{\alpha}_{i},\mathbf{\theta}))+\lambda_{0}\left \| \mathbf{x}_{i}-D\mathbf{\alpha}_{i} \right \|_{2}^{2}+\lambda_{1}\left \| \mathbf{\alpha}_{i} \right \|_{1} \right )+\lambda_{2}\left \| \mathbf{\theta} \right \|_{2}^{2}, \;\;\;(6) }[/math]


where [math]\displaystyle{ \lambda_{0} }[/math] controls the importance of the reconstruction term. The classification procedure involves supervised sparse coding

[math]\displaystyle{ y^{\star}=\underset{y=-1,+1}{\arg\min}\;\mathcal{S}^{\star }(\mathbf{x},D,\mathbf{\theta},y), \;\;\;(7) }[/math]


with

[math]\displaystyle{ \mathcal{S}^{\star }(\mathbf{x},D,\mathbf{\theta},y)=\min_{\mathbf{\alpha}}\;\mathcal{C}(yf(\mathbf{x},\mathbf{\alpha},\mathbf{\theta}))+\lambda_{0}\left \| \mathbf{x}-D\mathbf{\alpha} \right \|_{2}^{2}+\lambda_{1}\left \| \mathbf{\alpha} \right \|_{1}. \;\;\;(8) }[/math]


The learning procedure in (6) minimizes the sum of the costs for the pairs [math]\displaystyle{ (\mathbf{x}_{i},y_{i})_{i=1}^{m} }[/math] and corresponds to a generative model.
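The classification rule (7)-(8) can be sketched as follows for the linear model (3). The inner minimization over alpha is approximated here by plain subgradient descent, which is only an assumption made for illustration (the paper does not prescribe this solver), and the step size and iteration count are arbitrary.

<pre>
import numpy as np

def S_star(x, D, w, b, y, lam0, lam1, lr=0.01, n_iter=300):
    # Approximate S*(x, D, theta, y) in (8): minimize over alpha the sum of the
    # logistic loss C(y f), the reconstruction error, and the l1 penalty.
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        f = w @ alpha + b
        dC = -y / (1.0 + np.exp(y * f))           # derivative of the logistic loss wrt f
        grad = dC * w + 2.0 * lam0 * D.T @ (D @ alpha - x) + lam1 * np.sign(alpha)
        alpha -= lr * grad
    f = w @ alpha + b
    return (np.log1p(np.exp(-y * f))
            + lam0 * np.sum((x - D @ alpha) ** 2)
            + lam1 * np.abs(alpha).sum())

def classify(x, D, w, b, lam0, lam1):
    # Equation (7): assign the label whose supervised sparse-coding cost is smaller.
    costs = {y: S_star(x, D, w, b, y, lam0, lam1) for y in (-1, +1)}
    return min(costs, key=costs.get)
</pre>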

Discriminative Approach

Although in (7) the costs [math]\displaystyle{ \mathcal{S}^{\star }(\mathbf{x},D,\mathbf{\theta} ,y) }[/math] of a given signal are compared across the classes [math]\displaystyle{ y= -1, +1 }[/math], a more discriminative approach is to make the value of [math]\displaystyle{ \mathcal{S}^{\star }(\mathbf{x},D,\mathbf{\theta} ,-y_{i}) }[/math] much greater than [math]\displaystyle{ \mathcal{S}^{\star }(\mathbf{x},D,\mathbf{\theta} ,y_{i}) }[/math], which can be enforced with the logistic loss function [math]\displaystyle{ \mathcal{C} }[/math]. This leads to

[math]\displaystyle{ \min_{D,\mathbf{\theta}}\sum_{i=1}^{m}\mathcal{C}\left ( \mathcal{S}^{\star }(\mathbf{x}_{i},D,\mathbf{\theta},-y_{i})-\mathcal{S}^{\star }(\mathbf{x}_{i},D,\mathbf{\theta},y_{i}) \right )+\lambda_{2}\left \| \mathbf{\theta} \right \|_{2}^{2}. \;\;\;(9) }[/math]


However, a mixture of the generative formulation in (6) and its discriminative version in (9) is easier to solve than (9) alone. Hence, in this paper a mixed generative/discriminative model is proposed for sparse signal representation and classification with the learned dictionary [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] and model [math]\displaystyle{ \mathbf{\theta} }[/math], as follows:

[math]\displaystyle{ \min_{D,\mathbf{\theta}}\sum_{i=1}^{m}\left ( \mu\mathcal{C}\left ( \mathcal{S}^{\star }(\mathbf{x}_{i},D,\mathbf{\theta},-y_{i})-\mathcal{S}^{\star }(\mathbf{x}_{i},D,\mathbf{\theta},y_{i}) \right )+(1-\mu)\mathcal{S}^{\star }(\mathbf{x}_{i},D,\mathbf{\theta},y_{i}) \right )+\lambda_{2}\left \| \mathbf{\theta} \right \|_{2}^{2}, \;\;\;(10) }[/math]


where [math]\displaystyle{ \mu }[/math] controls the trade-off between the reconstruction and discrimination terms. Hereafter, this mixed model is referred to as the supervised dictionary learning-discriminative (SDL-D) model. As before, a constraint is imposed on [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] such that [math]\displaystyle{ \forall j, \left \| \mathbf{d}_{j} \right \|_{2}\leq 1 }[/math].
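As a small worked illustration of the per-sample cost in (10): given the two supervised sparse-coding costs s_pos = S*(x_i, D, theta, y_i) and s_neg = S*(x_i, D, theta, -y_i) (computed, for instance, by a routine like the S_star sketch above), mu interpolates between the discriminative logistic term and the generative term. Names and the example numbers are illustrative only.

<pre>
import numpy as np

def sdl_d_term(s_pos, s_neg, mu):
    # Per-sample SDL-D cost in (10):
    #   mu * C(S*(x,D,theta,-y) - S*(x,D,theta,y)) + (1 - mu) * S*(x,D,theta,y),
    # with C(z) = log(1 + exp(-z)) the logistic loss.
    return mu * np.log1p(np.exp(-(s_neg - s_pos))) + (1.0 - mu) * s_pos

print(sdl_d_term(s_pos=0.2, s_neg=1.5, mu=0.0))   # 0.2: pure generative cost
print(sdl_d_term(s_pos=0.2, s_neg=1.5, mu=1.0))   # ~0.24: small, since the wrong label costs much more
</pre>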

Multiclass Extension

The extension of all these formulations to multiclass problems is straightforward and can be done using the softmax discriminative cost functions [math]\displaystyle{ \mathcal{C}_{i}(x_{1},...,x_{p})=\log(\sum_{j=1}^{p}e^{x_{j}-x_{i}}) }[/math], which are multiclass versions of the logistic loss, and by learning one model [math]\displaystyle{ \mathbf{\theta}_{i} }[/math] per class.
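A quick numerical check of this cost (function names are illustrative): for p = 2 classes it reduces to the logistic loss of the score difference, consistent with the binary formulation above.

<pre>
import numpy as np

def softmax_cost(scores, i):
    # C_i(x_1, ..., x_p) = log( sum_j exp(x_j - x_i) ); scores is the vector (x_1, ..., x_p).
    return np.log(np.sum(np.exp(scores - scores[i])))

scores = np.array([2.0, -1.0])
# Binary case: C_1(x_1, x_2) = log(1 + exp(x_2 - x_1)), i.e. the logistic loss of x_1 - x_2.
print(np.isclose(softmax_cost(scores, 0), np.log1p(np.exp(scores[1] - scores[0]))))   # True
</pre>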

Optimization Procedure

The algorithm for supervised dictionary learning is presented below.

Input: n (signal dimension); [math]\displaystyle{ (\mathbf{x}_{i},y_{i})_{i=1}^{m} }[/math] (training signals); k (size of the dictionary); [math]\displaystyle{ \lambda_{0}, \lambda_{1}, \lambda_{2} }[/math] (parameters); [math]\displaystyle{ 0\leq\mu_{1}\leq\mu_{2}\leq...\leq\mu_{m}\leq1 }[/math] (increasing sequence).
Output: [math]\displaystyle{ D\in \mathbb{R}^{n\times k} }[/math] (dictionary); [math]\displaystyle{ \mathbf{\theta} }[/math] (parameters).
Initialization: Set [math]\displaystyle{ \mathbf{\mathit{D}} }[/math] to a random Gaussian matrix with normalized columns. Set [math]\displaystyle{ \mathbf{\theta} }[/math] to zero.
Loop: For [math]\displaystyle{ \mu=\mu_{1},...,\mu_{m}, }[/math]
Loop: Repeat until convergence (or a fixed number of iterations),
Supervised sparse coding: Solve, for all [math]\displaystyle{ i=1,..., m }[/math]

[math]\displaystyle{ \mathbf{\alpha}_{i,y}^{\star}=\underset{\mathbf{\alpha}}{\arg\min}\;\mathcal{S}(\mathbf{\alpha},\mathbf{x}_{i},D,\mathbf{\theta},y), \;\;\; y=-y_{i},\ y_{i}, \;\;\;(11) }[/math]

where [math]\displaystyle{ \mathcal{S}(\mathbf{\alpha},\mathbf{x},D,\mathbf{\theta},y)=\mathcal{C}(yf(\mathbf{x},\mathbf{\alpha},\mathbf{\theta}))+\lambda_{0}\left \| \mathbf{x}-D\mathbf{\alpha} \right \|_{2}^{2}+\lambda_{1}\left \| \mathbf{\alpha} \right \|_{1} }[/math] is the cost whose minimum over [math]\displaystyle{ \mathbf{\alpha} }[/math] defines [math]\displaystyle{ \mathcal{S}^{\star} }[/math] in (8).


Dictionary and parameters update: Solve

[math]\displaystyle{ \min_{D,\mathbf{\theta}}\sum_{i=1}^{m}\left ( \mu\mathcal{C}\left ( \mathcal{S}(\mathbf{\alpha}_{i,-y_{i}}^{\star},\mathbf{x}_{i},D,\mathbf{\theta},-y_{i})-\mathcal{S}(\mathbf{\alpha}_{i,y_{i}}^{\star},\mathbf{x}_{i},D,\mathbf{\theta},y_{i}) \right )+(1-\mu)\mathcal{S}(\mathbf{\alpha}_{i,y_{i}}^{\star},\mathbf{x}_{i},D,\mathbf{\theta},y_{i}) \right )+\lambda_{2}\left \| \mathbf{\theta} \right \|_{2}^{2}, \;\; \mathrm{s.t.} \;\; \forall j,\; \left \| \mathbf{d}_{j} \right \|_{2}\leq 1. \;\;\;(12) }[/math]
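The control flow of this procedure can be sketched as below. The two inner routines stand for steps (11) and (12) and are passed in as black boxes; their names, signatures, and the fixed inner iteration count are assumptions made for illustration rather than the authors' implementation.

<pre>
import numpy as np

def train_sdl(X, y, k, mus, sparse_code_step, update_step, n_inner=10):
    # X: (n, m) training signals, one per column; y: (m,) labels in {-1, +1};
    # mus: increasing sequence mu_1 <= ... <= mu_m in [0, 1];
    # sparse_code_step / update_step: caller-supplied routines for (11) and (12).
    n, m = X.shape
    D = np.random.randn(n, k)
    D /= np.linalg.norm(D, axis=0)          # random Gaussian init with normalized columns
    theta = np.zeros(k + 1)                 # e.g. (w, b) for the linear model, set to zero
    for mu in mus:                          # outer loop: gradually increase mu
        for _ in range(n_inner):            # inner loop: until convergence / fixed iterations
            codes = sparse_code_step(X, y, D, theta)              # step (11)
            D, theta = update_step(X, y, codes, D, theta, mu)     # step (12); keeps ||d_j||_2 <= 1
    return D, theta
</pre>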


Experimental Validation


References

<references />