Uncovering Shared Structures in Multiclass Classification
=Introduction=
In their paper ''Uncovering Shared Structures in Multiclass Classification'' <ref>Amit, Y., Fink, M., Srebro, N., & Ullman, S. (2007). Uncovering shared structures in multiclass classification. ICML.</ref>, Amit ''et al.'' wrote about how hidden structure, in the form of characteristics shared among different classes, can be exploited to improve accuracy in multiclass classification. This notion is often called ''learning-to-learn'' or ''interclass transfer'' (Thrun, 1996) <ref>Thrun, S. (1996). Learning to learn: Introduction. Kluwer Academic Publishers.</ref>.
The uncovering of such hidden structure was accomplished by a mechanism that learns the underlying characteristics that are shared between the target classes. The benefits of finding such common characteristics were demonstrated in the context of large-margin multiclass linear classifiers.

Accurate classification of an instance when there is a large number of target classes is a major challenge in many areas such as object recognition, face identification, textual topic classification, and phoneme recognition.

Usually, when there is a large number of classes, the classes are strongly related to each other and have some underlying common characteristics. As a simple example, even though there are many different kinds of mammals, many of them share common characteristics such as having a striped texture. If such true underlying characteristics that are common to the many different classes can be found, then the effective complexity of the multiclass problem can be significantly reduced.

Simultaneously learning the underlying structure shared by classes is a challenging optimization task. The usual goal of past heuristics was to extract powerful non-linear hidden characteristics; however, this goal was usually not convex, and it often resulted in local minima. In this paper, the authors instead modeled the shared characteristics between classes as linear transformations of the input space, so their model is a linear mapping of the shared features of classes followed by a multiclass linear classifier. Their model is not only learned in a convex manner, it also significantly improved the accuracy of multiclass linear classifiers.

=Formulation=

The ultimate goal of multiclass classification is to learn a mapping <math>\,H : \mathcal{X} \mapsto \mathcal{Y}</math> from instances in <math>\,\mathcal{X}</math> to labels in <math>\,\mathcal{Y} = \{1, \dots , k\}</math>.
In this paper, the authors considered linear classifiers over <math>\,\mathcal{X} = \mathbb{R}^{n}</math> that are parametrized by a vector of weights <math>\,W_y \in \mathbb{R}^{n}</math> for each class <math>\,y \in \mathcal{Y}</math>. These linear models were of the form:
<center><math>\,H_W(x) = \arg\max_{y \in \mathcal{Y}} \;W_{y}^{t} x \;\;\; (1)</math></center>
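As a quick illustration (a minimal NumPy sketch of my own, not code from the paper; the dimensions and variable names are arbitrary), the prediction rule <math>\,(1)</math> is simply an argmax over the <math>\,k</math> class scores <math>\,W_y^t x</math>:

<pre>
import numpy as np

n, k = 5, 3                      # input dimension, number of classes
rng = np.random.default_rng(0)
W = rng.standard_normal((n, k))  # column W[:, y] is the weight vector of class y
x = rng.standard_normal(n)       # a single instance

scores = W.T @ x                 # W_y^t x for every class y
y_hat = int(np.argmax(scores))   # prediction rule (1)
print(y_hat, scores)
</pre>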
In the paper, the authors adopted the suggestion of Crammer and Singer (2001) <ref>Crammer, K., & Singer, Y. (2001). On the algorithmic implementation of multiclass kernel-based vector machines. JMLR.</ref>, and learned the weights by minimizing a trade-off between an average empirical loss and the following regularizer:

<math>\underset{y}\Sigma\;||W_y||^2 = ||W||_{F}^2</math>, where <math>\,||W||_F</math> is the Frobenius norm of <math>\,W</math>, the matrix whose columns are the vectors <math>\,W_y</math>. The loss function suggested by Crammer ''et al.'' is the maximal hinge loss over all comparisons between the correct class and an incorrect class, and is given as <math>\,l(W;(x,y)) = \underset{y^{\prime} \ne y}{\max}\,[1 + W_{y^{\prime}}^tx - W_{y}^tx]_{+}</math>, where <math>\,[z]_{+} = \max(0,z)</math>. Then, for a trade-off parameter <math>\,C</math>, the authors found the weights using the following learning rule:

<center><math>\underset{W} {\min}\;\; \frac{1}{2} ||W||_{F}^2 + C \sum_{i=1}^{m} l(W;(x_i,y_i)) \;\;\; (2)</math></center>
For a binary classification problem, this formulation reduces to the standard Support Vector Machine (SVM). When there are more than two classes, the formulation requires a margin for each pair of classes and, for each training example, penalizes violations of the margin constraints. It can thus be viewed as a generalization of the SVM.
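As an illustration of the loss and of the learning rule <math>\,(2)</math>, here is a minimal NumPy sketch (my own; the function names and toy data are made up, and no optimization is performed, the objective is only evaluated):

<pre>
import numpy as np

def multiclass_hinge(W, x, y):
    # Crammer-Singer loss: max over y' != y of [1 + W_{y'}^t x - W_y^t x]_+
    scores = W.T @ x
    margins = 1.0 + scores - scores[y]
    margins[y] = 0.0                      # exclude the comparison y' = y
    return max(0.0, margins.max())

def objective(W, X, Y, C):
    # Objective (2): 0.5 * ||W||_F^2 + C * sum of hinge losses
    reg = 0.5 * np.sum(W ** 2)
    loss = sum(multiclass_hinge(W, x, y) for x, y in zip(X, Y))
    return reg + C * loss

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))          # m = 10 instances with n = 5 features
Y = rng.integers(0, 3, size=10)           # labels in {0, 1, 2}
W = rng.standard_normal((5, 3))
print(objective(W, X, Y, C=1.0))
</pre>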
Since the authors' goal was to obtain a classifier <math>\,W</math> that is better at classifying new instances by being able to extract characteristics that are shared among classes, they modeled each common characteristic <math>\,r</math> as a linear function of the input vector <math>\,x</math>, denoted <math>\,F_r^{t} x</math>. In this way, the activation of each class <math>\,y</math> is a linear function <math>\,G_y^{t}(F^t x)</math> of the vector of common characteristics <math>\,F^t x</math>, rather than a linear function of the input vector <math>\,x</math> itself.
The model in the paper replaces the weight matrix <math>\,W \in \mathbb{R}^{n\times k}</math> with <math>\,W = FG</math>, where <math>\,F \in \mathbb{R}^{n\times p}</math> is a weight matrix whose columns define the <math>\,p</math> common characteristics among classes, and <math>\,G \in \mathbb{R}^{p\times k}</math> is a weight matrix whose columns predict the classes from the <math>\,p</math> common characteristics, as follows:
<center><math>H_{G,F}(x) = \arg\max_{y \in \mathcal{Y}}\;G_y^{t} (F^t x) = \arg\max_{y \in \mathcal{Y}}\;(FG)_y^{t} x \;\;\; (3)</math></center>
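A small NumPy check (my own illustration, with an arbitrary choice of <math>\,p</math>) confirms that scoring through the shared characteristics, <math>\,G_y^t(F^t x)</math>, is the same as scoring with <math>\,W = FG</math> directly, as stated in <math>\,(3)</math>:

<pre>
import numpy as np

n, p, k = 6, 2, 4                 # input dim, shared characteristics, classes
rng = np.random.default_rng(1)
F = rng.standard_normal((n, p))   # columns define the shared characteristics
G = rng.standard_normal((p, k))   # columns predict classes from the characteristics
x = rng.standard_normal(n)

shared = F.T @ x                  # p-dimensional vector of shared characteristics
scores_two_step = G.T @ shared    # G_y^t (F^t x)
scores_direct = (F @ G).T @ x     # (FG)_y^t x
assert np.allclose(scores_two_step, scores_direct)
print(int(np.argmax(scores_direct)))
</pre>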
The authors showed that regularizing the decomposition <math>\,FG</math>, instead of the Frobenius norm of the weight matrix <math>\,W</math>, can make the resulting model better at generalizing to new instances.
The authors' goal was to simultaneously learn the common characteristics among classes ( <math>\,F</math> ) and the class weights ( <math>\,G</math> ). In addition to regularizing <math>\,||G||_F^{2}</math>, they also regularized <math>\,||F||_F^{2}</math>, which led to the following learning rule:
<center><math>\underset{F,G}{\min} \;\; \frac{1}{2} ||F||_F^{2} + \frac{1}{2} ||G||_F^{2} + C \sum_{i=1}^{m} l(FG;(x_i,y_i)) \;\;\; (4)</math></center>

The number of characteristics shared by classes ( <math>\,p</math> ) is not limited because, as seen in <math>\,(4)</math>, the regularization uses the norms rather than the dimensionalities of <math>\,F</math> and <math>\,G</math>.
The optimization problem in <math>\,(4)</math> is not convex. However, instead of learning <math>\,F</math> and <math>\,G</math>, the authors considered the trace-norm of <math>\,W</math>, which is <math>\,||W||_{\Sigma} = \underset{FG = W} {\min} \;\; \frac{1}{2} (||F||_F^{2} + ||G||_F^{2}) = \underset{i} \Sigma \;| \gamma_i |</math>. The trace-norm is a convex function of <math>\,W</math>, and it can be expressed using <math>\,W</math>'s singular values, which are the <math>\,\gamma_i</math>'s.
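The trace norm is easy to compute from the singular values. The NumPy sketch below (my own illustration) also checks that the balanced factorization built from the SVD, <math>\,F = U\sqrt{D}</math> and <math>\,G = \sqrt{D}V</math>, attains the minimum in the definition above:

<pre>
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((6, 4))

gamma = np.linalg.svd(W, compute_uv=False)
trace_norm = gamma.sum()                       # ||W||_Sigma = sum of singular values

# Balanced factorization from the SVD W = U diag(s) Vt
U, s, Vt = np.linalg.svd(W, full_matrices=False)
F = U * np.sqrt(s)                             # F = U sqrt(D)
G = np.sqrt(s)[:, None] * Vt                   # G = sqrt(D) V
assert np.allclose(F @ G, W)
print(trace_norm, 0.5 * (np.sum(F**2) + np.sum(G**2)))   # the two values agree
</pre>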
The authors then re-expressed the optimization problem in <math>\,(4)</math> as a convex optimization problem that finds <math>\,W</math>, as follows:

<center><math>\underset{W}{\min} \;\; ||W||_{\Sigma} + C \sum_{i=1}^{m} l(W;(x_i,y_i)) \;\;\; (5)</math></center>
The optimization problem in <math>\,(5)</math> could then be formulated as a [http://en.wikipedia.org/wiki/Semidefinite_programming semi-definite program] (SDP) and easily solved.
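For small problems, <math>\,(5)</math> can be prototyped with an off-the-shelf convex solver. The sketch below is only an illustration (it assumes the CVXPY package and uses its nuclear-norm atom rather than writing out the SDP explicitly; the toy data are made up, and this is not the authors' implementation):

<pre>
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n, k, C = 20, 5, 3, 1.0
X = rng.standard_normal((m, n))          # rows are training instances
Y = rng.integers(0, k, size=m)

W = cp.Variable((n, k))
losses = []
for i in range(m):
    scores = W.T @ X[i]                  # the k class scores for instance i
    delta = np.ones(k)
    delta[Y[i]] = 0.0                    # margin of 1 for every y' != y_i, 0 for y_i
    # max_{y'} [delta(y') + W_{y'}^t x_i] - W_{y_i}^t x_i equals the hinge loss in (2)
    losses.append(cp.max(scores + delta) - scores[Y[i]])

objective = cp.Minimize(cp.normNuc(W) + C * cp.sum(cp.hstack(losses)))
problem = cp.Problem(objective)
problem.solve()
print(problem.value, np.linalg.matrix_rank(W.value, tol=1e-3))
</pre>

The printed rank gives a rough sense of how many shared characteristics the trace-norm solution uses; how low it is depends on the data and on <math>\,C</math>.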
<br>

=Dualization and Kernelization=

Taking into consideration the advantages of large-margin methods, the authors obtained the dual form and then the kernelized form of <math>\,(5)</math>.
Using Lagrange duality (more details are available [http://people.rit.edu/jcdicsa/courses/SML/06lagrange.pdf here]), they obtained the [http://en.wikipedia.org/wiki/Dual_problem dual] of <math>\,(5)</math>, as follows:
<math>\, \max \;\; \underset{i} \Sigma \,(-Q_{i_{y_i}}) \;\;\; \text{s.t.} \;\;\; \forall_{i,j \ne y_i} \;\; Q_{ij} \ge 0, \;\;\; \forall_i \;\; (-Q_{i_{y_i}}) = \underset{j \ne y_i} \Sigma Q_{ij} \le c, \;\;\; ||XQ||_2 \le 1 \;\;\; (6) </math>
In <math>\,(6)</math>, <math>\,Q \in \mathbb{R}^{m\times k}</math> is the dual Lagrange variable and <math>\,||XQ||_2</math> is the [http://mathworld.wolfram.com/SpectralNorm.html spectral norm] (the maximal singular value) of <math>\,XQ</math>. <math>\,(6)</math> can also be written as a semi-definite program.
The authors re-expressed the spectral norm constraint in <math>\,(6)</math>, <math>\,||XQ||_2 \le 1</math>, as <math>\,||Q^t(X^tX)Q||_2 \le 1</math>, and thereby rewrote <math>\,(6)</math> as:
<math>\, \max \;\; \underset{i} \Sigma \,(-Q_{i_{y_i}}) \;\;\; \text{s.t.} \;\;\; \forall_{i,j \ne y_i} \;\; Q_{ij} \ge 0, \;\;\; \forall_i \;\; (-Q_{i_{y_i}}) = \underset{j \ne y_i} \Sigma Q_{ij} \le c, \;\;\; ||Q^tKQ||_2 \le 1 \;\;\; (7) </math>
In <math>\,(7)</math>, <math>\,K = X^tX</math> is the [http://mathworld.wolfram.com/GramMatrix.html Gram matrix]. <math>\,(7)</math> is a convex problem in <math>\,Q</math> that involves a semidefinite constraint (the spectral-norm constraint) on <math>\,Q^tKQ</math>, whose size depends on the number of classes <math>\,k</math> rather than on the size of the training set.
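The rewriting of the constraint only uses the fact that the squared spectral norm of <math>\,XQ</math> equals the spectral norm of <math>\,Q^tKQ</math>. A quick NumPy check (my own, with random matrices in place of an actual dual solution) illustrates this:

<pre>
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 7, 10, 4
X = rng.standard_normal((n, m))              # columns are training instances
Q = rng.standard_normal((m, k))
K = X.T @ X                                  # Gram matrix

spec_XQ = np.linalg.norm(X @ Q, 2)           # largest singular value of XQ
spec_QKQ = np.linalg.norm(Q.T @ K @ Q, 2)    # largest eigenvalue of Q^t K Q
assert np.isclose(spec_XQ ** 2, spec_QKQ)
print(spec_XQ, spec_QKQ)
</pre>

Hence <math>\,||XQ||_2 \le 1</math> holds exactly when <math>\,||Q^tKQ||_2 \le 1</math> holds.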
Using <math>\,(7)</math> and Theorem 1 given below (the proof is in the paper listed in Reference), the authors were able to find the optimum weight matrix <math>\,W</math> in terms of the dual optimum <math>\,Q</math>, and they were thus able to use the kernel mechanism for prediction of new instances.
'''Theorem 1''' (Representer Theorem):
Let <math>\,Q</math> be the optimum of <math>\,(7)</math> and <math>\,V</math> be the matrix of eigenvectors of <math>\,Q^tKQ</math>; then, for some diagonal <math>\,D \in \mathbb{R}^{k\times k}</math>, the matrix <math>\,W = X(QV^tDV)</math> is an optimum of <math>\,(5)</math>, with <math>\,||W||_{\Sigma} = \Sigma_r \;|D_{rr}|</math>.

By substituting <math>\,W = XQV^tDV</math> into <math>\,(5)</math>, the first term becomes <math>\,\Sigma_r \;|D_{rr}|</math>, while the second term is piecewise linear in <math>\,KQV^tDV</math>. Therefore, we can easily solve a simple [http://en.wikipedia.org/wiki/Linear_programming linear program] (LP) in the <math>\,k</math> unknown entries on the diagonal of <math>\,D</math> to find <math>\,D</math> and hence find <math>\,W</math>. It should be noted that the number of variables in this LP depends only on the number of classes (<math>\,k</math>), and that the entire procedure of solving <math>\,(7)</math>, which consists of extracting <math>\,V</math> and then finding <math>\,D</math> and thus <math>\,W</math>, uses only the Gram matrix (<math>\,K</math>).
Note that the following corollary immediately follows from Theorem 1:

'''Corollary 1''':

There exists <math>\,\alpha \in \mathbb{R}^{m\times k}</math> such that <math>\,W = X\alpha</math> is an optimum of <math>\,(5)</math>.

<br>
=Learning a Latent Feature Representation=
As mentioned above, learning <math>\,F</math> can be interpreted as learning a latent feature space <math>\,F^tX</math>, which is useful for prediction. Because <math>\,F</math> is learned jointly over all classes, it effectively transfers knowledge between the classes.
The authors considered <math>\,k</math> binary classification tasks, and used <math>\,W_j</math> as a linear predictor for the <math>\,j</math>th task. Using an SVM to learn each task independently, they came up with the following learning rule:
<math>\underset{W}{\min} \;\; \underset{j}\Sigma\; (\frac{1}{2}\; ||W_j||^2 + C\;l_j(W_j)) = \underset{W}{\min}\; \frac{1}{2}\; ||W||_F^2 + C \;\underset{j}\Sigma \;l_j(W_j) \;\;\; (8)</math>, where <math>\,l_j(W_j)</math> is the total (hinge) loss of <math>\,W_j</math> on the training examples for task <math>\,j</math>.
Replacing the Frobenius norm with the trace norm ( <math>\, \underset{W}{\min}\; ||W||_{\Sigma} + C\; \underset{j} \Sigma\; l_j(W_j)</math> ) corresponds to learning a feature representation <math>\,\phi(x) = F^tx</math> that allows good, low-norm prediction for all <math>\,k</math> tasks. After such a feature representation is learned, a new task can be learned directly from the feature vectors <math>\,F^tx</math> with a standard SVM, whilst taking advantage of the knowledge transferred from the other, previously-learned, tasks. It should be noted that we can learn such a feature representation <math>\,\phi(x) = F^tx</math> even if we do not have the explicit feature vectors <math>\,X</math>, because we only need a kernel <math>\,k</math> to obtain the Gram matrix <math>\,K = X^tX</math>.
Using Corollary 1 given above, the authors' goal was to obtain a matrix <math>\,\alpha</math> such that <math>\,W = X \alpha</math> is an optimum of <math>\,(5)</math>.
Let <math>\,W = UDV</math> be the [http://en.wikipedia.org/wiki/Singular_value_decomposition singular value decomposition] of <math>\,W</math>. Then <math>\,F = U \sqrt{D}</math> is an optimum of <math>\,(4)</math>. The singular value decomposition of <math>\,\alpha^tK\alpha = \alpha^tX^tX\alpha = W^tW = V^tD^2V</math> can then be calculated to obtain <math>\,D</math> and <math>\,V</math>.
The result is <math>\,D^{-1/2}V\alpha^tK = D^{-1/2}V(\alpha^tX^t)X = D^{-1/2}VW^tX = D^{-1/2}VV^tDU^tX = D^{1/2}U^tX = F^tX</math>, which provides an explicit representation of the learned feature space that can be calculated from <math>\,K</math> and <math>\,\alpha</math> alone.
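This identity can be checked numerically. The NumPy sketch below (my own illustration; random matrices stand in for the learned <math>\,\alpha</math>) recovers <math>\,D</math> and <math>\,V</math> from the eigendecomposition of <math>\,\alpha^tK\alpha</math> and verifies that <math>\,D^{-1/2}V\alpha^tK</math> matches <math>\,F^tX</math>, up to the sign ambiguity of eigenvectors:

<pre>
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 8, 12, 3
X = rng.standard_normal((n, m))        # columns are training instances
alpha = rng.standard_normal((m, k))    # stands in for the learned dual coefficients
K = X.T @ X                            # Gram matrix: all we keep in the kernel setting

W = X @ alpha                          # an optimum has this form by Corollary 1
U, s, Vt = np.linalg.svd(W, full_matrices=False)   # W = U D V with D = diag(s)
F = U * np.sqrt(s)                     # F = U sqrt(D)

# Recover D and V from alpha^t K alpha = V^t D^2 V, using only K and alpha
evals, evecs = np.linalg.eigh(alpha.T @ K @ alpha)
order = np.argsort(evals)[::-1]
D = np.sqrt(np.clip(evals[order], 0.0, None))
V = evecs[:, order].T                  # rows are the eigenvectors

feat_kernel = (V / np.sqrt(D)[:, None]) @ alpha.T @ K   # D^{-1/2} V alpha^t K
feat_direct = F.T @ X                                   # F^t X (needs X explicitly)

# Eigenvectors are only determined up to sign, so align row signs before comparing
signs = np.sign(np.sum(feat_kernel * feat_direct, axis=1))
assert np.allclose(signs[:, None] * feat_kernel, feat_direct)
print("kernel-only features match F^t X (up to sign)")
</pre>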
<br>

=Optimization=
The optimization problem <math>\,(5)</math> can be formulated as a semi-definite program (SDP) and easily solved using off-the-shelf SDP solvers to find the optimal <math>\,W</math>. However, such solvers are typically based on [http://en.wikipedia.org/wiki/Interior_point_method interior point methods], so they scale poorly with the size of the problem. As a result, the authors optimized <math>\,(5)</math> using simple though powerful [http://en.wikipedia.org/wiki/Gradient_descent gradient-based methods].
Because <math>\,(5)</math> is non-differentiable, gradient-based methods cannot be applied to it directly. The authors therefore considered a smoothed approximation to <math>\,(5)</math>. This was done by replacing the trace norm of <math>\,W</math> ( given above as <math>\,||W||_{\Sigma} = \underset{FG = W} {\min} \;\; \frac{1}{2} (||F||_F^{2} + ||G||_F^{2}) = \underset{i} \Sigma\; | \gamma_i |</math> ) with a smooth proxy, which was in turn done by replacing the non-smooth absolute value in the trace norm with a smooth function <math>\,g</math> defined as:
<math>\,g(\gamma) = \left\{\begin{matrix} \frac {\gamma^{2}}{2r} + \frac {r}{2} &\text{if } |\gamma| \le r \\ |\gamma| &\text{otherwise} \end{matrix}\right.</math>

In <math>\,g(\gamma)</math>, <math>\,r</math> is some predefined cutoff point. It is easy to see that <math>\,g</math> is continuously differentiable.
Using <math>\,g</math>, the authors replaced the trace norm <math>\,||W||_{\Sigma} = \underset{i} \Sigma \; | \gamma_i |</math> by <math>\,||W||_{S} = \underset{i} \Sigma \; g(\gamma_i)</math>, where the <math>\,\gamma_i</math>'s are the singular values of <math>\,W</math>.
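A short NumPy sketch of the smoothed norm (my own illustration; the cutoff <math>\,r</math> is arbitrary) shows that <math>\,||W||_S</math> approaches the trace norm <math>\,||W||_{\Sigma}</math> as <math>\,r</math> shrinks:

<pre>
import numpy as np

def g(gamma, r):
    # Smooth surrogate for |gamma|: quadratic below the cutoff r, |gamma| above it
    gamma = np.abs(gamma)
    return np.where(gamma <= r, gamma**2 / (2 * r) + r / 2, gamma)

def smoothed_trace_norm(W, r):
    # ||W||_S = sum_i g(gamma_i) over the singular values of W
    return g(np.linalg.svd(W, compute_uv=False), r).sum()

rng = np.random.default_rng(6)
W = rng.standard_normal((6, 4))
trace_norm = np.linalg.svd(W, compute_uv=False).sum()
for r in (1.0, 0.1, 0.001):
    print(r, smoothed_trace_norm(W, r), trace_norm)
</pre>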
Ultimately, the authors approximated <math>\,(5)</math> with the following convex and continuously-differentiable objective, in which the hinge loss <math>\,l</math> is likewise replaced by a smoothed version <math>\,l_{S}</math>:
<center><math>\underset{W}{\min} \;\; ||W||_{S} + C \sum_{i=1}^{m} l_{S}(W;(x_i,y_i)) \;\;\; (9)</math></center>
In the following figures (taken from the authors' paper given in Reference), the left plot illustrates how well the matrix <math>\,W</math> found by solving <math>\,(9)</math> with conjugate gradient descent approximates the matrix <math>\,W</math> found by solving <math>\,(5)</math> with an interior point SDP solver (SDP3). The right plot illustrates how much better the gradient-based optimization of <math>\,(9)</math> scales with an increasing number of training instances than the semi-definite programming formulation <math>\,(5)</math>.
[[File:ps1f1.jpg]]

As mentioned above, a kernelized gradient-based optimization approach can be used to solve <math>\,(9)</math> if we do not have the feature vectors <math>\,X</math> and we only have the Gram matrix <math>\,K = X^tX</math>. Using Corollary 1 given above, we know the optimum of <math>\,(5)</math> has the form <math>\,W = X\alpha</math>, and so we can substitute <math>\,W = X\alpha</math> into <math>\,(9)</math> and then minimize over <math>\,\alpha</math>.

To obtain this kernelized gradient-based approach, the authors let <math>\,X\alpha = UDV</math> denote the SVD of <math>\,X\alpha</math>, and then obtained the SVD of <math>\,\alpha^tK\alpha</math> as <math>\,V^tD^2V</math>. They thus recovered <math>\,D</math> from the SVD of <math>\,\alpha^tK\alpha</math>, and used <math>\,||W||_{S} = \underset{i} \Sigma \; g(\gamma_i)</math> to find <math>\,||X\alpha||_S</math>.
The authors computed (details of the steps are given in the paper in Reference) the gradient of <math>\,||X \alpha ||_S</math> with respect to <math>\,\alpha</math> as:

<center><math>\, \frac{\partial ||X \alpha ||_S}{\partial \alpha} = K \alpha V^tD^{-1}g'(D)V \;\;\; (10)</math></center>
Since both <math>\,V</math> and <math>\,D</math> can be found from the SVD of <math>\,\alpha^tK\alpha</math>, one can use <math>\,(10)</math> to evaluate this gradient in terms of <math>\,K</math> and <math>\,\alpha</math>, and so gradient-based optimization can be applied efficiently when working with a kernel <math>\,k(x, x')</math>.
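The gradient formula <math>\,(10)</math> can be sanity-checked against finite differences. The sketch below (my own illustration with random data; it assumes distinct, nonzero singular values so that <math>\,||X\alpha||_S</math> is smooth at the chosen point) evaluates the right-hand side of <math>\,(10)</math> using only <math>\,K</math> and <math>\,\alpha</math>:

<pre>
import numpy as np

r_cut = 0.1    # cutoff of the smooth surrogate g

def g(gamma):
    gamma = np.abs(gamma)
    return np.where(gamma <= r_cut, gamma**2 / (2 * r_cut) + r_cut / 2, gamma)

def g_prime(gamma):
    return np.where(np.abs(gamma) <= r_cut, gamma / r_cut, np.sign(gamma))

def norm_S(alpha, K):
    # ||X alpha||_S from K and alpha: the singular values of X alpha are the
    # square roots of the eigenvalues of alpha^t K alpha
    evals = np.linalg.eigvalsh(alpha.T @ K @ alpha)
    return g(np.sqrt(np.clip(evals, 0.0, None))).sum()

def grad_norm_S(alpha, K):
    # Right-hand side of (10): K alpha V^t D^{-1} g'(D) V
    evals, evecs = np.linalg.eigh(alpha.T @ K @ alpha)
    D = np.sqrt(np.clip(evals, 0.0, None))
    V = evecs.T                                  # rows are eigenvectors
    middle = V.T @ np.diag(g_prime(D) / D) @ V
    return K @ alpha @ middle

rng = np.random.default_rng(7)
n, m, k = 8, 12, 3
X = rng.standard_normal((n, m))
K = X.T @ X
alpha = rng.standard_normal((m, k))

analytic = grad_norm_S(alpha, K)
numeric = np.zeros_like(alpha)
eps = 1e-6
for i in range(m):                               # central finite differences
    for j in range(k):
        E = np.zeros_like(alpha)
        E[i, j] = eps
        numeric[i, j] = (norm_S(alpha + E, K) - norm_S(alpha - E, K)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))        # should be tiny (around 1e-7 or less)
</pre>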
<br>

=Spectral Properties of Trace Norm Regularization=

When the authors minimized <math>\,||F||_{F}^{2} + ||G||_{F}^{2}</math> instead of <math>\,||W||_{F}^{2}</math>, they imposed a regularization preference for an <math>\,L_1</math> norm, instead of an <math>\,L_2</math> norm, on the spectrum of <math>\,W</math>.
When the various target classes share common characteristics, the spectrum of <math>\,W</math> should not be expected to be uniform, since a large portion of this spectrum will be concentrated on a few singular values. The <math>\,L_2</math> spectrum regularization due to the Frobenius norm penalizes such concentrated spectra heavily and tends to attenuate the dominant singular values of <math>\,W</math>, so it is not well suited here. On the other hand, the <math>\,L_1</math> spectrum regularization due to the trace norm does not have this tendency, and it is therefore much better suited for preserving the underlying structures of characteristics that are shared between the target classes.
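The point is easiest to see on the spectrum itself: the Frobenius norm is the <math>\,L_2</math> norm of the singular values, while the trace norm is their <math>\,L_1</math> norm, so for a given amount of spectral energy the trace norm favours concentrating it on a few shared directions. A tiny NumPy illustration (my own, with a synthetic low-rank-plus-noise weight matrix):

<pre>
import numpy as np

rng = np.random.default_rng(8)
n, k, shared = 50, 20, 3
# W with a few dominant shared directions plus small class-specific noise
W = rng.standard_normal((n, shared)) @ rng.standard_normal((shared, k))
W += 0.05 * rng.standard_normal((n, k))

gamma = np.linalg.svd(W, compute_uv=False)
frobenius = np.sqrt(np.sum(gamma**2))     # L2 norm of the spectrum = ||W||_F
trace = np.sum(gamma)                     # L1 norm of the spectrum = ||W||_Sigma
print(np.round(gamma[:6], 2))             # a few large values, then a sharp drop
print(frobenius, trace)
</pre>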
The authors performed an experiment (details can be found in their paper listed in Reference), in which they found two matrices <math>\,W_F</math> and <math>\,W_{\Sigma}</math> using the Frobenius norm regularization ( <math>\,(2)</math> given above ) and the trace norm regularization ( <math>\,(5)</math> given above ). Using 500 test instances, the authors found that the generalization error for <math>\,W_F</math>, which was <math>\,47\%</math>, was much higher than that for <math>\,W_{\Sigma}</math>, which was <math>\,31\%</math>.
Next, the authors used singular value decomposition to reduce the dimensionality of the data before comparing the Frobenius norm regularization and the trace norm regularization. They found that not only did any SVD dimensionality reduction reduce the test performance of both regularization schemes, but the generalization error for the reduced <math>\,W_F</math> was also consistently worse than that for the reduced <math>\,W_{\Sigma}</math>. They thus concluded that "post-hoc dimensionality reduction could not attenuate the importance of finding the underlying structure as an integral part of the learning procedure"[1]. Personally, I wonder whether other dimensionality-reduction algorithms, such as [http://en.wikipedia.org/wiki/Multidimensional_scaling multidimensional scaling] and [http://en.wikipedia.org/wiki/Semidefinite_embedding semidefinite embedding], could improve rather than degrade the test performance of the two regularization schemes.
<br>

= Experiments =

'''Experiment I: Letter Recognition'''
In their first experiment, the authors studied performance on the recognition of the 26 characters from [http://archive.ics.uci.edu/ml/ UCI]'s ''letter'' dataset. They found the two matrices <math>\,W_F</math>, which resulted from the Frobenius norm regularization ( <math>\,(2)</math> given above ), and <math>\,W_{\Sigma}</math>, which resulted from the trace norm regularization ( <math>\,(5)</math> given above ). They exhaustively searched over 15 values between <math>\,2^{-9}</math> and <math>\,2^5</math> to determine the value of the trade-off parameter <math>\,C</math>.
Performance was evaluated using 500 test instances. The generalization errors for <math>\,W_F</math> and <math>\,W_{\Sigma}</math> were found to be <math>\,10.1\%</math> and <math>\,8.7\%</math>, respectively.

More details regarding this experiment can be found in the authors' paper listed in Reference.
'''Experiment II: Mammal Recognition'''

In their second experiment, the authors studied performance on the recognition of mammal images. They chose the 72 mammals that have at least 12 profile instances from the mammal benchmark made available by Fink and Ullman (2007). They used approximately 1,000 images for training and a similar number of images for testing. The authors were confident that the target classes of the mammal images shared underlying common characteristics, because the mammals in these images could be grouped into four genetically-related families: deer, canines, felines, and rodents. These four families of mammals are shown in the following figure (taken from the paper listed in Reference):

[[File:ps1f2.jpg]]
The authors found the two matrices <math>\,W_F</math>, which resulted from the Frobenius norm regularization ( <math>\,(2)</math> given above ), and <math>\,W_{\Sigma}</math>, which resulted from the trace norm regularization ( <math>\,(5)</math> given above ). They determined the value of the trade-off parameter <math>\,C</math> using the same procedure as in their first experiment. They found that the accuracy of the multiclass SVM resulting from the trace norm regularization, which was <math>\,33\%</math>, was higher than that resulting from the Frobenius norm regularization, which was <math>\,29\%</math>.

Theoretically, as discussed above, learning <math>\,F</math> effectively learns a latent feature space <math>\,F^tX</math>, and, since <math>\,F</math> is learned jointly over all classes, learning <math>\,F</math> effectively transfers knowledge between the classes.
The implication is that a new class can be learned from only a few training instances; thus, theoretically, classes that have fewer training instances should benefit more from using the trace norm regularization instead of the Frobenius norm regularization. The authors obtained the following figure (taken from the paper listed in Reference), which shows the gain in classification accuracy from using the trace norm regularization instead of the Frobenius norm regularization, as a function of the number of training instances:
[[File:ps1f3.jpg]]

As a concrete example, the authors looked at one of the most frequent classes, the wombats, which contained 30 training instances. They repeatedly relearned <math>\,W_F</math> and <math>\,W_{\Sigma}</math> as they reduced the number of wombat examples to 24, then to 18, and lastly to 12. When all 30 wombat instances were used, the classification accuracy from using the Frobenius norm regularization was higher than that from using the trace norm regularization by <math>\,2.2\%</math>. When the number of wombat instances was reduced to 24, the accuracy gain from using the Frobenius norm regularization was down to <math>\,1.2\%</math>. When the number of wombat instances was reduced to 18, however, the classification accuracy from using the trace norm regularization was higher than that from using the Frobenius norm regularization by <math>\,1.4\%</math>. When the number of wombat instances was reduced to 12, the accuracy gain from using the trace norm regularization was up to <math>\,3.7\%</math>.

<br>

=Conclusion=
As a concluding remark, I cite the following lines from Amit ''et al.'''s paper:
''In this paper we suggested an efficient method to extract the underlying structures that characterize a set of target classes. We believe that this approach is part of a trend that emphasizes the importance of sharing representational knowledge in order to enable large scale classification.''

<br>

=Reference=

<references />