# Introduction

In their paper *Uncovering Shared Structures in Multiclass Classification* <ref>Amit, Y., Fink, M., Srebro, N., & Ullman, S. (2007). Uncovering Shared Structures in Multiclass Classification. ICML.</ref>, Amit et al. describe how hidden structure, in the form of characteristics shared among different classes, can be exploited to improve accuracy in multiclass classification. This notion is often called learning-to-learn or interclass transfer (Thrun, 1996) <ref>Thrun, S. (1996). Learning to learn: Introduction. Kluwer Academic Publishers.</ref>.

The uncovering of such hidden structure was accomplished by a mechanism that learns the underlying characteristics shared between the target classes. The benefits of finding such common characteristics were demonstrated in the context of large-margin multiclass linear classifiers.

Accurate classification of an instance when there is a large number of target classes is a major challenge in many areas such as object recognition, face identification, textual topic classification, and phoneme recognition.

Usually, when there is a large number of classes, the classes are strongly related to each other and have some underlying common characteristics. As a simple example, even though there are many different kinds of mammals, many of them share common characteristics such as having a striped texture. If such true underlying characteristics that are common to the many different classes can be found, then the effective complexity of the multiclass problem can be significantly reduced.

Simultaneously learning the underlying structure shared by classes is a challenging optimization task. Past heuristics typically aimed to extract powerful non-linear hidden characteristics, but the resulting optimization problems were usually non-convex and often suffered from local minima. In this paper, the authors instead modeled the shared characteristics between classes as linear transformations of the input space, so their model was a linear mapping of the shared features of classes followed by a multiclass linear classifier. Not only could their model be learned in a convex manner, it also significantly improved the accuracy of multiclass linear classifiers.

# Formulation

The ultimate goal of multiclass classification is to learn a mapping $\,H : \mathcal{X} \mapsto \mathcal{Y}$ from instances in $\,\mathcal{X}$ to labels in $\,\mathcal{Y} = \{1, \dots , k\}$.

In this paper, the authors considered linear classifiers over $\,\mathcal{X} = \mathbb{R}^{n}$ that are parametrized by a vector of weights $\,W_y \in \mathbb{R}^{n}$ for each class $\,y \in \mathcal{Y}$. These linear models were of the form:

$\,H_W(x) = \arg\max_{y \in \mathcal{Y}} \;W_{y}^{t} x \;\;\; (1)$
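As a concrete illustration, the prediction rule $\,(1)$ can be sketched in a few lines of NumPy (the weights and the instance below are made up for the example):

```python
import numpy as np

# Sketch of the multiclass linear rule (1): predict the class whose
# weight vector has the largest inner product with the instance.
# Shapes follow the text: W is n x k, with one column W_y per class.
n, k = 4, 3
W = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0,  1.0],
              [ 0.5, 0.5,  0.5]])
x = np.array([2.0, 0.0, 0.0, 1.0])

scores = W.T @ x                # W_y^t x for every class y
y_hat = int(np.argmax(scores))  # H_W(x)
print(y_hat)                    # 0
```

The ties that `argmax` breaks arbitrarily do not matter for generic real-valued weights.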

In the paper, the authors adopted the suggestion of Crammer and Singer (2001) <ref> Crammer, K., & Singer, Y. (2001). On the algorithmic implementation of multiclass kernel-based vector machines. JMLR.</ref>, and learned the weights by minimizing a trade-off between an average empirical loss and the following regularizer:

$\,\sum_{y} ||W_y||^2 = ||W||_{F}^2$, where $\,||W||_F$ is the Frobenius norm of the matrix $\,W$ whose columns are the vectors $\,W_y$. The loss function suggested by Crammer and Singer is the maximal hinge loss over all comparisons between the correct class and an incorrect class, given as $\,l(W;(x,y)) = \max_{y' \ne y}\,[1 + W_{y'}^{t}x - W_{y}^{t}x]_{+}$, where $\,[z]_{+} = \max(0,z)$. Then, to find the weights for a trade-off parameter $\,C$, the authors used the following learning rule:

$\underset{W} {\min}\;\; \frac{1}{2} ||W||_{F}^2 + C \sum_{i=1}^{m} l(W;(x_i,y_i)) \;\;\; (2)$

This formulation reduces to the Support Vector Machine (SVM) for a binary classification problem. When there are more than two classes, the formulation requires a margin between each pair of classes and, for each training example, penalizes violations of the margin constraint. It can thus be viewed as a generalization of the SVM.
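The pieces of the learning rule $\,(2)$ can be sketched as follows (the function names are mine; a real implementation would minimize this objective, e.g. by subgradient descent):

```python
import numpy as np

def multiclass_hinge(W, x, y):
    """Crammer-Singer loss: the largest violation [1 + W_{y'}^t x - W_y^t x]_+
    over all incorrect classes y'."""
    scores = W.T @ x
    margins = 1.0 + scores - scores[y]
    margins[y] = 0.0               # exclude the correct class from the max
    return max(0.0, margins.max())

def objective(W, X, ys, C):
    """Regularized objective (2): 0.5 ||W||_F^2 + C * sum of hinge losses."""
    reg = 0.5 * np.sum(W ** 2)
    loss = sum(multiclass_hinge(W, x, y) for x, y in zip(X, ys))
    return reg + C * loss

# With zero weights, every example incurs a loss of exactly 1.
W = np.zeros((2, 3))
X = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ys = [0, 2]
print(objective(W, X, ys, C=1.0))  # 2.0
```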

Since the authors' goal was to obtain a classifier $\,W$ that is better at classifying new instances by being able to extract characteristics that are shared among classes, they modeled each common characteristic $\,r$ as a linear function of the input vectors $\,x$, denoted $\,F_r^{t} (x)$. In this way, the activation of each class $\,y$ is a linear function $\,G_y^{t}(F^t x)$ of vectors of common characteristics among classes $\,F^t x$ rather than being a linear function of the input vectors $\,x$.

The model in the paper substitutes the weight matrix $\,W \in \mathbb{R}^{n\times k}$ with $\,W = FG$, where $\,F \in \mathbb{R}^{n\times p}$ is a weight matrix whose columns define the $\,p$ common characteristics among classes, and $\,G \in \mathbb{R}^{p\times k}$ is a weight matrix whose columns predict the classes from the $\,p$ common characteristics, as follows:

$H_{G,F}(x) = \arg\max_{y \in \mathcal{Y}}\;G_y^{t} (F^t x) = \arg\max_{y \in \mathcal{Y}}\;(FG)_y^{t} x \;\;\; (3)$
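The identity in $\,(3)$, namely that scoring through the shared characteristics equals scoring with the composite matrix $\,W = FG$, is easy to verify numerically (the dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 6, 2, 4
F = rng.normal(size=(n, p))   # columns: the p shared characteristics
G = rng.normal(size=(p, k))   # columns: class predictors over those characteristics
x = rng.normal(size=n)

# Two-stage view: map x to the shared characteristics, then score classes...
scores_two_stage = G.T @ (F.T @ x)
# ...which is identical to scoring with the composite weight matrix W = FG.
scores_composite = (F @ G).T @ x
print(np.allclose(scores_two_stage, scores_composite))  # True
```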

The authors showed that regularizing the decomposition $\,FG$ instead of the Frobenius norm of the weight matrix $\,W$ can make the resulting model generalize better to new instances.

The authors' goal was to simultaneously learn the common characteristics among classes ( $\,F$ ) and the class weights ( $\,G$ ). In addition to regularizing $\,||G||_F^{2}$, they also regularized $\,||F||_F^{2}$, which led to the following learning rule:

$\underset{F,G}{\min} \;\; \frac{1}{2} ||F||_F^{2} + \frac{1}{2} ||G||_F^{2} + C \sum_{i=1}^{m} l(FG;(x_i,y_i)) \;\;\; (4)$

The number of characteristics shared by classes ( $\,p$ ) is not limited because, as seen in $\,(4)$, the regularization uses the norms rather than the dimensionalities of $\,F$ and $\,G$.

The optimization problem in $\,(4)$ is not convex. However, instead of learning $\,F$ and $\,G$, the authors considered the trace-norm of $\,W$, which is $\,||W||_{\Sigma} = \underset{FG = W} {\min} \;\; \frac{1}{2} (||F||_F^{2} + ||G||_F^{2}) = \underset{i} \Sigma | \gamma_i |$. The trace-norm is a convex function of $\,W$, and it can be expressed using $\,W$'s singular values, which are the $\,\gamma_i$'s. The authors then re-expressed the optimization problem in $\,(4)$ as a convex optimization problem that finds $\,W$, as follows:

$\underset{W}{\min} \;\; ||W||_{\Sigma} + C \sum_{i=1}^{m} l(W;(x_i,y_i)) \;\;\; (5)$

The optimization problem in $\,(5)$ could then be formulated as a semi-definite program (SDP) and solved with standard SDP solvers.
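The variational definition of the trace norm given above can be checked numerically: the factorization built from the SVD, $\,F = U\sqrt{D}$ and $\,G = \sqrt{D}V$, attains the minimum. A minimal sketch, with an arbitrary random $\,W$:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))

# Trace norm = sum of singular values of W.
trace_norm = np.linalg.svd(W, compute_uv=False).sum()

# The factorization W = (U sqrt(D)) (sqrt(D) V) attains the minimum of
# 0.5 (||F||_F^2 + ||G||_F^2) over all factorizations FG = W.
U, d, Vt = np.linalg.svd(W, full_matrices=False)
F = U @ np.diag(np.sqrt(d))
G = np.diag(np.sqrt(d)) @ Vt
attained = 0.5 * (np.sum(F ** 2) + np.sum(G ** 2))

print(np.allclose(F @ G, W))            # True
print(np.isclose(attained, trace_norm))  # True
```

Here $\,||F||_F^2 = ||G||_F^2 = \sum_i \gamma_i$ because the columns of $\,U$ and the rows of $\,V$ are orthonormal.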

# Dualization and Kernelization

Taking into consideration the advantages of large-margin methods, the authors obtained the dual form and then the kernelized form of $\,(5)$.

Using Lagrange duality (the detailed derivation is given in the paper), the authors obtained the dual of $\,(5)$, as follows:

$\, \max \;\; \underset{i} \Sigma (-Q_{i{y_i}}) \;\;\; s.t. \;\;\; \forall_{i,j \ne y_i} \;\; Q_{ij} \ge 0, \;\;\; \forall_i \;\; (-Q_{i{y_i}}) = \underset{j \ne y_i} \Sigma Q_{ij} \le C, \;\;\; ||XQ||_2 \le 1 \;\;\; (6)$

In $\,(6)$, $\,Q \in \mathbb{R}^{m\times k}$ is the dual Lagrange variable and $\,||XQ||_2$ is the spectral norm (the maximal singular value) of $\,XQ$. $\,(6)$ can also be written as a semi-definite program.

The authors re-expressed the spectral norm constraint in $\,(6)$, which was $\,||XQ||_2 \le 1$, as $\,||Q^t(X^tX)Q||_2 \le 1$, and re-expressed $\,(6)$ as:

$\, \max \;\; \underset{i} \Sigma (-Q_{i{y_i}}) \;\;\; s.t. \;\;\; \forall_{i,j \ne y_i} \;\; Q_{ij} \ge 0, \;\;\; \forall_i \;\; (-Q_{i{y_i}}) = \underset{j \ne y_i} \Sigma Q_{ij} \le C, \;\;\; ||Q^tKQ||_2 \le 1 \;\;\; (7)$

In $\,(7)$, $\,K = X^tX$ is the Gram matrix. $\,(7)$ is a convex problem in $\,Q$ that involves a semidefinite constraint (the spectral-norm constraint on $\,Q^tKQ$) whose size does not depend on the size of the training set, but only on the number of classes $\,k$.

Using $\,(7)$ and Theorem 1 given below (the proof is in the paper listed in Reference), the authors were able to recover the optimal weight matrix $\,W$ from the dual optimum $\,Q$, and they were thus able to use the kernel mechanism for the prediction of new instances.

Theorem 1 (Representer Theorem):

Let $\,Q$ be the optimum of $\,(7)$ and $\,V$ be the matrix of eigenvectors of $\,Q^tKQ$; then for some diagonal $\,D \in \mathbb{R}^{k\times k}$, the matrix $\,W = X(QV^tDV)$ is an optimum of $\,(5)$, with $\,||W||_{\Sigma} = \Sigma_r \;|D_{rr}|$.

By substituting $\,W = XQV^tDV$ into $\,(5)$, the first term becomes $\,\Sigma_r \;|D_{rr}|$, while the second term is piecewise linear in $\,KQV^tDV$. Therefore, finding $\,D$, and hence $\,W$, amounts to solving a simple linear program (LP) in the $\,k$ unknown entries on the diagonal of $\,D$. It should be noted that the number of variables in this LP depends only on the number of classes ($\,k$), and that the entire procedure of solving $\,(7)$, extracting $\,V$, and then finding $\,D$ and thus $\,W$ uses only the Gram matrix ($\,K$).

Note that the following corollary immediately follows from Theorem 1:

Corollary 1:

There exists $\,\alpha \in \mathbb{R}^{m\times k}$ such that $\,W = X\alpha$ is an optimum of $\,(5)$.

# Learning a Latent Feature Representation

As mentioned above, learning $\,F$ can be interpreted as learning a latent feature space $\,F^tX$, which is useful for prediction. Because $\,F$ is learned jointly over all classes, it effectively transfers knowledge between the classes.

The authors considered $\,k$ binary classification tasks, and used $\,W_j$ as a linear predictor for the $\,j$th task. Using an SVM to learn each class independently, they came up with the following learning rule:

$\underset{W}{\min} \;\; \underset{j}\Sigma\; (\frac{1}{2}\; ||W_j||^2 + C\;l_j(W_j)) = \underset{W}{\min}\; \frac{1}{2}\; ||W||_F^2 + C \;\underset{j}\Sigma \;l_j(W_j) \;\;\; (8)$, where $\,l_j(W_j)$ is the total (hinge) loss of $\,W_j$ on the training examples for task $\,j$.

Replacing the Frobenius norm with the trace norm ( $\, \underset{W}{\min}\; ||W||_{\Sigma} + C\; \underset{j} \Sigma\; l_j(W_j)$ ) corresponds to learning a feature representation $\,\phi(x) = F^tx$ that allows good, low-norm prediction for all $\,k$ tasks. After such a feature representation is learned, a new task can be learned directly from the feature vectors $\,F^tx$ with a standard SVM, while taking advantage of the knowledge transferred from the other, previously learned, tasks. It should be noted that the feature representation $\,\phi(x) = F^tx$ can be learned even when the explicit representation $\,X$ is unavailable, because only a kernel is needed to obtain the Gram matrix $\,K = X^tX$.

Using Corollary 1 given above, the goal of the authors was to obtain a matrix $\,\alpha$ such that $\,W = X \alpha$ is an optimum of $\,(5)$. Let $\,W = UDV$ be the singular value decomposition of $\,W$. Then $\,F = U \sqrt{D}$ is an optimum of $\,(4)$. The singular value decomposition of $\,\alpha^tK\alpha = \alpha^tX^tX\alpha = W^tW = V^tD^2V$ can then be calculated to obtain $\,D$ and $\,V$. The result is $\,D^{-1/2}V\alpha^tK = D^{-1/2}V(\alpha^tX^t)X = D^{-1/2}VW^tX = D^{-1/2}VV^tDU^tX = D^{1/2}U^tX = F^tX$, which provides an explicit representation of the learned feature space that can be calculated from $\,K$ and $\,\alpha$ alone.
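This recipe can be sketched directly in NumPy (the function name is mine, and a full-rank $\,W = X\alpha$ is assumed so that the kept eigenvalues are strictly positive):

```python
import numpy as np

def feature_map_from_kernel(alpha, K):
    """Recover the learned feature representation F^t X from the Gram
    matrix K and the coefficient matrix alpha alone (W = X alpha)."""
    M = alpha.T @ K @ alpha              # = W^t W = V^t D^2 V
    evals, evecs = np.linalg.eigh(M)
    keep = evals > 1e-10                 # drop numerically zero directions
    D = np.sqrt(evals[keep])             # singular values of W
    V = evecs[:, keep].T                 # rows: right singular directions
    G = np.diag(np.sqrt(D)) @ V          # class weights over the features
    FtX = np.diag(1.0 / np.sqrt(D)) @ V @ alpha.T @ K   # D^{-1/2} V alpha^t K
    return FtX, G

# Sanity check: two-stage scores G^t (F^t X) must reproduce W^t X = alpha^t K.
rng = np.random.default_rng(2)
X = rng.normal(size=(6, 8))              # n features, m instances
K = X.T @ X
alpha = rng.normal(size=(8, 3))
FtX, G = feature_map_from_kernel(alpha, K)
print(np.allclose(G.T @ FtX, alpha.T @ K))  # True
```

The sign ambiguity of the eigenvectors flips rows of both $\,F^tX$ and $\,G$ consistently, so predictions are unaffected.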

# Optimization

The optimization problem $\,(5)$ can be formulated as a semi-definite program (SDP) and solved using off-the-shelf SDP solvers to find the optimal $\,W$. However, such solvers are typically based on interior-point methods, and so they scale poorly with the size of the problem. As a result, the authors optimized $\,(5)$ using simple though powerful gradient-based methods.

Because $\,(5)$ is non-differentiable, gradient-based methods cannot be directly applied to it. The authors therefore considered a smoothed approximation to $\,(5)$. This was done by replacing the trace-norm of $\,W$ ( given above as $\,||W||_{\Sigma} = \underset{FG = W} {\min} \;\; \frac{1}{2} (||F||_F^{2} + ||G||_F^{2}) = \underset{i} \Sigma\; | \gamma_i |$ ) with a smooth proxy, and this was in turn done by replacing the non-smooth absolute value in this trace norm with a smooth function $\,g$ that was defined as:

$\,g(\gamma) = \begin{cases} \frac {\gamma^{2}}{2r} + \frac {r}{2} & \text{if } |\gamma| \le r \\ |\gamma| & \text{otherwise} \end{cases}$

In $\,g(\gamma)$, $\,r$ is some predefined cutoff point. It is easy to see that $\,g$ is continuously differentiable.

Using $\,g$, the authors replaced the trace norm $\,||W||_{\Sigma} = \underset{i} \Sigma \; | \gamma_i |$ by $\,||W||_{S} = \underset{i} \Sigma \; g(\gamma_i)$, where the $\,\gamma_i$'s are the singular values of $\,W$.
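A direct implementation of the smoothed norm is straightforward (a minimal sketch; the helper names are mine):

```python
import numpy as np

def g(gamma, r):
    """Smooth proxy for |gamma|: quadratic below the cutoff r, |gamma| above."""
    gamma = np.abs(gamma)
    return np.where(gamma <= r, gamma ** 2 / (2 * r) + r / 2, gamma)

def smoothed_trace_norm(W, r=0.1):
    """||W||_S: apply g to each singular value of W and sum."""
    return g(np.linalg.svd(W, compute_uv=False), r).sum()

# g matches |gamma| above the cutoff and joins it continuously at r:
print(g(np.array([2.0]), r=0.5))  # [2.]
print(g(np.array([0.5]), r=0.5))  # [0.5]  (0.25/1.0 + 0.25)
```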

Ultimately, the authors re-expressed $\,(5)$ as the following convex and continuously-differentiable optimization problem, in which the loss is likewise replaced with a smoothed variant $\,l_S$:

$\underset{W}{\min} \;\; ||W||_{S} + C \sum_{i=1}^{m} l_{S}(W;(x_i,y_i)) \;\;\; (9)$

In the following figures (taken from the authors' paper given in Reference), the left one illustrates how closely the matrix $\,W$ found by solving $\,(9)$ with conjugate gradient descent approximates the matrix $\,W$ found by solving $\,(5)$ with an interior-point SDP solver (SDPT3). The right one illustrates how much better the gradient-based optimization of $\,(9)$ scales with an increasing number of training instances than the semi-definite programming formulation of $\,(5)$.

As mentioned above, a kernelized gradient-based optimization approach can be used to solve $\,(9)$ if we do not have the feature vectors $\,X$ and we only have the Gram matrix $\,K = X^tX$. Using Corollary 1 given above, we know the optimum of $\,(5)$ has the form $\,W = X\alpha$, and so we can substitute $\,W = X\alpha$ into $\,(9)$ and then minimize over $\,\alpha$.

To obtain this kernelized gradient-based approach, the authors let $\,X\alpha = UDV$ denote the SVD of $\,X\alpha$, and then obtained the SVD of $\,\alpha^tK\alpha$ as $\,V^tD^2V$. They thus recovered $\,D$ from the SVD of $\,\alpha^tK\alpha$, and used $\,||W||_{S} = \underset{i} \Sigma \; g(\gamma_i)$ to find $\,||X\alpha||_S$.

The authors computed (details of the steps are given in the paper in Reference) the gradient of $\,||X\alpha||_S$ with respect to $\,\alpha$ as:

$\, \frac{\partial ||X \alpha ||_S}{\partial \alpha} = K \alpha V^tD^{-1}g^{'}(D)V \;\;\; (10)$

Since both $\,V$ and $\,D$ can be found from the SVD of $\,\alpha^tK\alpha$, one can use $\,(10)$ to find this gradient in terms of $\,K$ and $\,\alpha$, and so gradient-based optimization can be applied efficiently with any kernel $\,k(x, x')$.
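The gradient formula $\,(10)$ can be sketched and sanity-checked against a central finite difference using only $\,K$ and $\,\alpha$ (a sketch under my own naming; note that $\,D^{-1}g'(D)$ simplifies to $\,1/r$ on the quadratic branch of $\,g$, so no division by a zero singular value occurs):

```python
import numpy as np

def smoothed_norm_and_grad(alpha, K, r):
    """||X alpha||_S and its gradient (10), computed from K and alpha alone."""
    M = alpha.T @ K @ alpha                 # = V^t D^2 V
    evals, evecs = np.linalg.eigh(M)
    D = np.sqrt(np.clip(evals, 0.0, None))  # singular values of X alpha
    V = evecs.T
    value = np.where(D <= r, D ** 2 / (2 * r) + r / 2, D).sum()
    # D^{-1} g'(D) is 1/r on the quadratic branch and 1/D on the linear branch.
    ratio = np.where(D <= r, 1.0 / r, 1.0 / np.maximum(D, r))
    grad = K @ alpha @ V.T @ np.diag(ratio) @ V
    return value, grad

# Sanity check of (10) against a central finite difference.
rng = np.random.default_rng(3)
X = rng.normal(size=(5, 8))
K = X.T @ X
alpha = 0.1 * rng.normal(size=(8, 3))
E = rng.normal(size=alpha.shape)            # random perturbation direction
eps = 1e-6
vp, _ = smoothed_norm_and_grad(alpha + eps * E, K, r=0.5)
vm, _ = smoothed_norm_and_grad(alpha - eps * E, K, r=0.5)
_, grad = smoothed_norm_and_grad(alpha, K, r=0.5)
print(np.isclose((vp - vm) / (2 * eps), np.sum(grad * E), atol=1e-3))
```

The eigenvector sign ambiguity cancels here, because $\,V$ enters the gradient only through $\,V^t(\cdot)V$.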

# Spectral Properties of Trace Norm Regularization

When the authors minimized $\,||F||_{F}^{2} + ||G||_{F}^{2}$ instead of $\,||W||_{F}^{2}$, they imposed a regularization preference for an $\,L_1$ norm, instead of an $\,L_2$ norm, on the spectrum of $\,W$.

When the various target classes share common characteristics, the spectrum of $\,W$ should be expected to be non-uniform, with a large portion of it concentrated on a few singular values. The $\,L_2$ spectrum regularization induced by the Frobenius norm penalizes large singular values heavily and thus favors a uniform spectrum, so it is not well suited here. The $\,L_1$ spectrum regularization induced by the trace norm does not have this tendency, and it is therefore much better suited for preserving the underlying structures of characteristics shared between the target classes.

The authors performed an experiment (details can be found in their paper listed in Reference), in which they found two matrices $\,W_F$ and $\,W_{\Sigma}$ using the Frobenius norm regularization ( $\,(2)$ given above ) and the trace norm regularization ( $\,(5)$ given above ). Using 500 test instances, the authors found that the generalization error for $\,W_F$, which was $\,47\%$, was much higher than that for $\,W_{\Sigma}$, which was $\,31\%$.

Next, the authors used singular value decomposition to reduce the dimensionality of the data before comparing the Frobenius norm regularization and the trace norm regularization. They found that not only did every SVD dimensionality reduction degrade the test performance of both regularization schemes, but the generalization error for the reduced $\,W_F$ was also consistently worse than that for the reduced $\,W_{\Sigma}$. They thus concluded that "post-hoc dimensionality reduction could not attenuate the importance of finding the underlying structure as an integral part of the learning procedure"[1]. Personally, however, I wonder whether other dimensionality-reduction algorithms, such as multidimensional scaling and semidefinite embedding, could improve rather than degrade the test performance of both regularization schemes.

# Experiments

Experiment I: Letter Recognition

In their first experiment, the authors studied performance on the recognition of the 26 characters from UCI's letter dataset. They found the two matrices $\,W_F$, which resulted from the Frobenius norm regularization ( $\,(2)$ given above ), and $\,W_{\Sigma}$, which resulted from the trace norm regularization ( $\,(5)$ given above ). They exhaustively searched over 15 values between $\,2^{-9}$ and $\,2^5$ to determine the value of the trade-off parameter $\,C$.

Performance was evaluated using 500 test instances. The generalization errors for $\,W_F$ and $\,W_{\Sigma}$ were found to be $\,10.1\%$ and $\,8.7\%$, respectively.

More details regarding this experiment can be found in the authors' paper listed in Reference.

Experiment II: Mammal Recognition

In their second experiment, the authors studied performance on the recognition of mammal images. They chose the 72 mammals that have at least 12 profile instances from the mammal benchmark made available by Fink and Ullman (2007). They used approximately 1,000 images for training and a similar number of images for testing. The authors were confident that the target classes of the mammal images shared underlying common characteristics. This was because the mammals in these images could be grouped into four genetically-related families, which are deer, canines, felines, and rodents. These four families of mammals are shown in the following figure (taken from the paper listed in Reference):

The authors found the two matrices $\,W_F$ that resulted from the Frobenius norm regularization ( $\,(2)$ given above ) and $\,W_{\Sigma}$ that resulted from the trace norm regularization ( $\,(5)$ given above ). They determined the value for the trade-off parameter $\,C$ using the same procedure from their first experiment. They found that the accuracy of the multiclass SVM resulting from the trace norm regularization, which was $\,33\%$, was higher than that resulting from the Frobenius norm regularization, which was $\,29\%$.

Theoretically, as discussed above, learning $\,F$ effectively learns a latent feature space $\,F^tX$, and, since $\,F$ is learned jointly over all classes, learning $\,F$ effectively transfers knowledge between the classes. The implication is that a new class can be learned from only a few training instances, and thus, theoretically, classes that have fewer training instances tend to benefit more from the trace norm regularization than from the Frobenius norm regularization. The authors obtained the following figure (taken from the paper listed in Reference), which shows the classification-accuracy advantage of the trace norm regularization over the Frobenius norm regularization as a function of the number of training instances:

As a concrete example, the authors looked at one of the most frequent classes, the wombats, which contains 30 training instances. They repeatedly relearned $\,W_F$ and $\,W_{\Sigma}$ as they reduced the number of wombat examples to 24, then to 18, and lastly to 12. When all 30 wombat instances were used, the classification accuracy from using the Frobenius norm regularization was higher than that from using the trace norm regularization by $\,2.2\%$. When the number of wombat instances was reduced to 24, the accuracy gain from using the Frobenius norm regularization was down to $\,1.2\%$. When the number of wombat instances was reduced to 18, however, the classification accuracy from using the trace norm regularization was higher than that from using the Frobenius norm regularization by $\,1.4\%$. When the number of wombat instances was reduced to 12, the accuracy gain from using the trace norm regularization was up to $\,3.7\%$.

# Conclusion

As a concluding remark, I cite the following passage from Amit et al.'s paper:

> In this paper we suggested an efficient method to extract the underlying structures that characterize a set of target classes. We believe that this approach is part of a trend that emphasizes the importance of sharing representational knowledge in order to enable large scale classification.

<references />