# Maximum-Margin Matrix Factorization

## Problem Definition

Let Y be an $n \times m$ matrix of n users' preferences about m movies, with $y_{ij} = +1$ if user i likes movie j and $y_{ij} = -1$ if they dislike it. Since only some of the users' opinions are known, Y is partially observed: some cells contain $\pm1$ and the rest are unknown. The goal is to find a matrix X that preserves the knowledge in Y and predicts the values of its unknown cells.

Predicting the unknown values is possible because the rows and columns of Y are assumed to be related to each other. This relation can be expressed through the rank of X; in other words, the rank of X indicates the number of features affecting the values in Y. Minimizing the number of features, i.e. the rank of X, therefore amounts to finding a simple relation in the given knowledge of Y. Keeping the number of features low is also one way to avoid over-fitting in the prediction process: if the rank of X is allowed to equal the rank of Y, the trivial solution X = Y results.

## Hinge Loss Function

If Y were fully observed, with no unknowns, the goal would be to find X as a simple (low-rank) representation of Y. With the sum-squared error (SSE) loss function, the SVD minimizes the loss directly. Although this solution works well for lowering the rank, it fails when Y contains unknowns (the problem then has several local minima). One may also consider loss functions other than SSE (even though SSE is convex and well behaved in optimization problems). In this problem the hinge loss works well, especially since the values in Y are restricted to $\pm1$. The hinge loss is defined as:

$Hinge(X | Y) = \displaystyle \sum_{(i,j) \in S} \max (0, 1 - x_{ij} \cdot y_{ij})$ (where $\, S = \{(i, j) | y_{ij}$ is known$\}$)
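As a quick numeric check, the hinge loss over the observed set $S$ can be computed in a few lines of NumPy. The matrices below are made-up illustrative data (not from the article); 0 marks an unknown entry:

```python
import numpy as np

# Partially observed sign matrix Y: 0 marks an unknown entry.
Y = np.array([[ 1, -1,  0],
              [ 0,  1,  1],
              [-1,  0,  1]])

# A candidate prediction X (arbitrary example values).
X = np.array([[ 0.9, -1.2,  0.3],
              [ 0.4,  0.8,  1.5],
              [-0.2,  0.1,  0.7]])

def hinge_loss(X, Y):
    """Sum of max(0, 1 - x_ij * y_ij) over the observed entries S."""
    S = Y != 0                       # mask of known entries
    return np.maximum(0.0, 1.0 - X[S] * Y[S]).sum()

print(hinge_loss(X, Y))  # loss is 0 only when every observed margin is >= 1
```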

## Prediction Using Matrix Factorization

Another way to tackle this problem is matrix factorization: find a prediction $X = UV^T$ such that each factor (U and V) has low rank. The factors are also meaningful in this problem: the ith row of U gives the importance (weights) of the features for the ith user, and the jth row of V characterizes the jth movie in terms of those features. Lowering the rank of X is equivalent to lowering the rank of its factors; unfortunately, rank minimization is not convex.

## Frobenius Norm

Instead of the rank of each factor, the Frobenius norm of each factor can be used; this norm is closely related to the rank of the matrix. Moreover, with the Frobenius norm the optimization problem becomes convex, so common optimization techniques can be applied. The Frobenius norm is defined as:

$\|X\|_{F}^2 = \sum x_{ij}^2 = Tr(XX^T) = \sum \lambda_i^2$ (where $Tr()$ is the trace function and the $\lambda_i$ are the singular values of X)
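The three expressions for $\|X\|_{F}^2$ can be verified numerically on a random matrix; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))

sum_sq  = (X ** 2).sum()             # sum of squared entries
tr_form = np.trace(X @ X.T)          # Tr(X X^T)
sv_form = (np.linalg.svd(X, compute_uv=False) ** 2).sum()  # sum of squared singular values

# All three agree up to floating-point error.
print(sum_sq, tr_form, sv_form)
```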

## Notion of Margin in Collaborative Filtering

Assume one of the factors is fixed and the goal is to predict the other. This is the familiar setting of linear prediction with an SVM: if U is fixed, predicting each column of X amounts to solving an SVM over U to find the corresponding row of V. Recall that in an SVM, maximizing the margin means minimizing the norm of the linear separator, $\|\beta\|^2$. Predicting Y with maximum margin is therefore equivalent to minimizing the norm of the factor V, i.e. $\|V\|_{F}^2$. The difficulty here is that U and V must be predicted together (this is collaborative filtering); but at each step one factor can be regarded as fixed, so minimizing the norm of the other factor yields the maximum-margin prediction.

## The Optimization Problem and Trace Norm

So far it has been shown that the optimization problem is to find the factors with minimum norm such that the prediction has a low loss:

$\displaystyle \min_{X = UV^T} \frac{1}{2}(\|U\|_{F}^2 + \|V\|_{F}^2) + c \cdot Hinge(X|Y)$

This optimization problem is difficult because neither the objective function nor the constraint is linear. The following lemma reshapes the problem into a form that is easier to solve:

Lemma 1:
$\displaystyle \min_{X = UV^T} \frac{1}{2}(\|U\|_{F}^2 + \|V\|_{F}^2) = \displaystyle \min_{X = UV^T} (\|U\|_{F} \cdot \|V\|_{F}) = \|X\|_{T}$

where $\|X\|_{T}$ is the Trace Norm of X which is defined as:
$\|X\|_{T} = \sum |\lambda_i| = Tr(\sqrt{XX^T})$
In addition, by using SVD of $X = A \Sigma B^T$, one can see $U = A \sqrt{\Sigma}$ and $V = B\sqrt{\Sigma}$ satisfy this lemma.
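A small NumPy check of this construction: the factors built from the SVD reproduce X, and both expressions in Lemma 1 equal the trace norm:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))

# SVD: X = A Sigma B^T (here s holds the singular values).
A, s, Bt = np.linalg.svd(X, full_matrices=False)
U = A * np.sqrt(s)       # A @ sqrt(Sigma): scales each column of A
V = Bt.T * np.sqrt(s)    # B @ sqrt(Sigma)

trace_norm = s.sum()
print(np.allclose(U @ V.T, X))                                    # X is recovered
print(0.5 * ((U**2).sum() + (V**2).sum()), trace_norm)            # equal
print(np.linalg.norm(U) * np.linalg.norm(V), trace_norm)          # equal
```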

Based on Lemma 1, the optimization problem can be reformulated as:

$\displaystyle \min_X \|X\|_{T} + c \cdot Hinge(X|Y)$
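A minimal sketch of how this objective can be minimized. The particular algorithm below (proximal subgradient descent, where the prox step of the trace norm soft-thresholds the singular values) is an illustrative choice, not the method prescribed in the article:

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft-thresholding: the prox operator of tau * ||.||_T."""
    P, s, Qt = np.linalg.svd(X, full_matrices=False)
    return (P * np.maximum(s - tau, 0.0)) @ Qt

def hinge(X, Y):
    """Hinge loss over the observed entries (Y != 0)."""
    S = Y != 0
    return np.maximum(0.0, 1.0 - (X * Y)[S]).sum()

def hinge_subgrad(X, Y):
    """A subgradient of the hinge loss with respect to X."""
    G = np.zeros_like(X)
    active = (Y != 0) & (X * Y < 1.0)   # entries violating the margin
    G[active] = -Y[active]
    return G

def mmmf(Y, c=1.0, step=0.1, iters=500):
    """Proximal-subgradient iteration for min ||X||_T + c * Hinge(X|Y)."""
    X = np.zeros_like(Y, dtype=float)
    for _ in range(iters):
        X = svt(X - step * c * hinge_subgrad(X, Y), step)
    return X

# Tiny made-up example: 0 marks an unknown entry.
Y = np.array([[ 1., -1.,  0.],
              [ 0.,  1.,  1.],
              [-1.,  0.,  1.]])
X = mmmf(Y)
objective = np.linalg.svd(X, compute_uv=False).sum() + hinge(X, Y)
print(objective)   # well below the value 6 attained at X = 0
```

Note the predicted X also fills in the unobserved cells, which is the whole point of the method.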

## Relation Between Rank And Trace Norm

In the literature, the notion of trace norm has been widely used instead of dealing with the rank of matrices. The next theorem explains the relation between the rank and the trace norm:

Theorem 1:
The convex envelope (smallest convex bounding function) of the rank function, on matrices with unit spectral norm, is the trace norm function.
The spectral norm of a matrix is its largest singular value.

In addition, the following relation between the Frobenius norm, the trace norm, and the rank of a matrix is easily shown:

$\forall X: \|X\|_{F} \leq \|X\|_{T} \leq \sqrt{Rank(X)} \cdot \|X\|_{F}$
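These inequalities are easy to confirm numerically on a random low-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))
B = rng.standard_normal((2, 5))
X = A @ B                            # a 6x5 matrix of rank 2 (almost surely)

s = np.linalg.svd(X, compute_uv=False)
fro = np.linalg.norm(X)              # ||X||_F
trace_norm = s.sum()                 # ||X||_T
rank = np.linalg.matrix_rank(X)

print(fro, trace_norm, np.sqrt(rank) * fro)  # increasing order
```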

Based on this relation, and the fact that the trace norm is a convex function, it can be shown that:

$\{X \mid \|X\|_{T} \leq \alpha\} = conv \{uv^T \mid u \in \Re^n , v \in \Re^m , |u| = |v| = \sqrt{\alpha}\}$

which shows that the set of matrices with a bounded trace norm is convex, and the lowest-rank (rank-one) matrices lie on its boundary as its extreme points. This optimization problem is thus a search over a convex set to minimize a convex function.
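A quick numeric illustration: a rank-one matrix $uv^T$ with $|u| = |v| = \sqrt{\alpha}$ has trace norm exactly $\alpha$ (since $\|uv^T\|_{T} = |u|\,|v|$), and any convex combination of such matrices stays inside the trace-norm ball of radius $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 2.0
n, m, k = 4, 5, 10

# Random rank-one matrices u v^T with |u| = |v| = sqrt(alpha).
mats = []
for _ in range(k):
    u = rng.standard_normal(n)
    u *= np.sqrt(alpha) / np.linalg.norm(u)
    v = rng.standard_normal(m)
    v *= np.sqrt(alpha) / np.linalg.norm(v)
    mats.append(np.outer(u, v))

# A random convex combination of the extreme points.
w = rng.random(k)
w /= w.sum()
X = sum(wi * Mi for wi, Mi in zip(w, mats))

tn = np.linalg.svd(X, compute_uv=False).sum()
print(tn, "<=", alpha)
```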

## Soft Margin Optimization

Just as in SVM, where the classes may not be linearly separable and no separator satisfies all the constraints, there exist matrices Y for which no factorization preserves all the knowledge in Y. The same remedy as in soft-margin SVM applies here: slack variables are added to the loss function:

$\displaystyle \min_X \|X\|_{T} + c \cdot \displaystyle \sum_{(i,j) \in S} \xi_{ij}$
s.t. $\forall (i,j) \in S: \xi_{ij} \geq 0$ , $x_{ij} \cdot y_{ij} \geq 1 - \xi_{ij}$

## Using Semi-Definite Programming

This optimization problem is now easy to understand, and it is convex with linear constraints. To solve it, it should be reformulated as one of the standard convex optimization problems. The next lemma shows how semi-definite programming (SDP) can be used:

Lemma 2:
$\forall X \in \Re^{n \times m}$ and $t \in \Re : \|X\|_{T} \leq t \Longleftrightarrow \exists A, B$ s.t. $M = \left( \begin{array}{cc} A & X \\ X^T & B \\ \end{array} \right) \succeq 0$ and $Tr(M) \leq 2t$
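Lemma 2 can be checked numerically by building M from the SVD-based factors of Lemma 1: taking $A = UU^T$ and $B = VV^T$ gives $M = [U;V][U;V]^T \succeq 0$ with $Tr(M) = \|U\|_F^2 + \|V\|_F^2 = 2\|X\|_{T}$:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 3))

# Factors from Lemma 1: X = P Sigma Q^T, U = P sqrt(Sigma), V = Q sqrt(Sigma).
P, s, Qt = np.linalg.svd(X, full_matrices=False)
U = P * np.sqrt(s)
V = Qt.T * np.sqrt(s)

# Witness for Lemma 2 with A = U U^T and B = V V^T (note U V^T = X).
M = np.block([[U @ U.T, X], [X.T, V @ V.T]])

eigs = np.linalg.eigvalsh(M)
print(eigs.min())                    # nonnegative up to floating-point error
print(np.trace(M), 2 * s.sum())      # equal: Tr(M) = 2 ||X||_T
```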

Therefore the optimization problem above can be formulated as an SDP with a linear objective and linear constraints:

$\displaystyle \min_{M \succeq 0} Tr(M) + c \cdot \displaystyle \sum_{(i,j) \in S} \xi_{ij}$
s.t. $M = \left( \begin{array}{cc} A & X \\ X^T & B \\ \end{array} \right)$ and $\forall (i,j) \in S: \xi_{ij} \geq 0$ , $x_{ij} \cdot y_{ij} \geq 1 - \xi_{ij}$