# Probabilistic Matrix Factorization

## Introduction and Motivation (Netflix Dataset)

The paper presents an approach to collaborative filtering that scales linearly with the number of observations and thus can handle large datasets. The methods are discussed in the context of the Netflix dataset.
The Netflix dataset was released as part of the Netflix Prize, a contest started by the company in 2006; the goal was to see if researchers could develop a prediction algorithm that could best the company's in-house algorithm.
The data released contains over 100 million observations from nearly half a million users, creating an interesting problem space---motivated, of course, by a grand prize of $1,000,000<ref name="PMF">R. Salakhutdinov and A.Mnih. "Probabilistic Matrix Factorization". *Neural Information Processing Systems 21 (NIPS 2008).* Jan. 2008.</ref>.

More formally, the dataset is an $N \times M$ matrix where each row corresponds to a user, each column to a movie, and the entries to ratings. Ratings take on values $1, \dots, 5$. The difficulty in this problem arises from the fact that each user has rated only a subset of the movies. A popular paradigm for solving this is to train a low-dimensional factor model---an example of which is collaborative filtering. This approach is based on the idea that the attitudes or preferences of a user are determined by a small number of unobserved factors. If there is some latent rank-$D$ structure underlying these ratings (that is, underlying variables that determine how much a user enjoys a movie), then to find these variables we can factor the matrix.

Formally, given a preference matrix $R$ which is $N \times M$, we wish to find a $D \times N$ matrix $U$ and a $D \times M$ matrix $V$ such that $R \approx U^T V$. $V$ is a factor matrix (i.e. for each latent variable, it gives the value of each movie on that factor) and $U$ gives the coefficients of these factors for each user. The actual "Prize" dataset has $N = 480{,}189$ users and $M = 17{,}770$ movies, with over 100 million of these entries having known values. To train a low-dimensional factor model for this dataset, one must, under a given loss function, find the best rank-$D$ approximation to the observed target matrix $R$.
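
As a concrete illustration of the factorization (a minimal numpy sketch with made-up toy dimensions, not the authors' code), a predicted rating for user $i$ and movie $j$ is simply the inner product of column $i$ of $U$ with column $j$ of $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, M = 5, 4, 6             # latent factors, users, movies (toy sizes)

U = rng.normal(size=(D, N))   # column U[:, i]: factor coefficients of user i
V = rng.normal(size=(D, M))   # column V[:, j]: factor values of movie j

R_hat = U.T @ V               # N x M matrix of predicted ratings

# a single prediction is the dot product of the corresponding columns
assert np.isclose(R_hat[2, 3], U[:, 2] @ V[:, 3])
print(R_hat.shape)            # (4, 6)
```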

Taking a probabilistic interpretation of a deterministic technique in machine learning is not a novel paradigm---consider the evolution of the well-known probabilistic latent semantic analysis from its original formulation as latent semantic analysis<ref name="PLSA">Thomas Hofmann. "Probabilistic latent semantic analysis." In Proceedings of the 15th Conference on Uncertainty in AI, pages 289--296, San Francisco, California, 1999. Morgan Kaufmann.</ref>. The trouble, though, with probabilistic (i.e. Bayesian) inference is that exact inference may not be possible. For example, consider prior work on probabilistic collaborative filtering<ref name="marlin2003">Benjamin Marlin. Modeling user rating profiles for collaborative filtering. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Scholkopf, editors, NIPS. MIT Press, 2003.</ref><ref name="marlin2004">Benjamin Marlin and Richard S. Zemel. The multiple multiplicative factor model for collaborative filtering. In Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004. ACM, 2004.</ref>. Approximation methods, such as the collapsed Gibbs sampling method popular in information retrieval, are then required to perform inference.

Another difficulty that arises with this dataset is that it is unbalanced. There is a lot of variance in user activity; thus, some rows are much sparser than others. In fact, over 10% of users have fewer than 20 ratings.

There are several existing ways to solve this problem, but they often require approximations that are slow and, at times, inaccurate. Another instinctive approach is to use the SVD. However, since much of the data is missing in this case, the optimization problem that must be solved is not convex. Thus, existing methods do not scale well and often have low accuracy on sparse rows.

Instead of constraining the rank of the approximation $U^T V$, i.e. the number of factors, researchers have proposed penalizing the norms of $U$ and $V$. However, learning this model requires solving a sparse semi-definite program, so this approach is infeasible for very large datasets such as this one, which has millions of observations.

The proposed solution is a probabilistic model: probabilistic matrix factorization (PMF)<ref name="PMF"/>. Unlike all of the above-mentioned approaches except the matrix-factorization-based ones, PMF scales well to large datasets. Furthermore, unlike most existing algorithms, which have trouble making accurate predictions for users with very few ratings, PMF performs well on very sparse and imbalanced datasets such as the Netflix dataset.

## Probabilistic Matrix Factorization

Given the preference matrix described above with entries $R_{ij}$, the aim is to find a factorization that minimizes the root mean squared error (RMSE) on the test set. An initial attempt is to use a linear model in which we assume Gaussian noise in the data. Define $I_{ij}$ to be 1 if $R_{ij}$ is known (i.e. user $i$ has rated movie $j$) and 0 otherwise. Further, let $U \in \mathbb{R}^{D \times N}$ and $V \in \mathbb{R}^{D \times M}$, with column $U_i$ the feature vector of user $i$ and column $V_j$ the feature vector of movie $j$, and let $\mathcal{N}(x \mid \mu, \sigma^2)$ denote the Gaussian density with mean $\mu$ and variance $\sigma^2$. Then, we can define a conditional probability of the ratings with noise hyperparameter $\sigma^2$:

$$p(R \mid U, V, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \left[ \mathcal{N}(R_{ij} \mid U_i^T V_j, \sigma^2) \right]^{I_{ij}} \qquad (1)$$

and priors on $U$ and $V$ with hyperparameters $\sigma_U^2$ and $\sigma_V^2$:

$$p(U \mid \sigma_U^2) = \prod_{i=1}^{N} \mathcal{N}(U_i \mid 0, \sigma_U^2 \mathbf{I}), \qquad p(V \mid \sigma_V^2) = \prod_{j=1}^{M} \mathcal{N}(V_j \mid 0, \sigma_V^2 \mathbf{I}) \qquad (2)$$

Then we aim to maximize the log posterior over $U$ and $V$ (to derive this, simply substitute the definition of $\mathcal{N}$ into the product of (1) and (2) and take the log):

$$\ln p(U, V \mid R, \sigma^2, \sigma_U^2, \sigma_V^2) = -\frac{1}{2\sigma^2} \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \left( R_{ij} - U_i^T V_j \right)^2 - \frac{1}{2\sigma_U^2} \sum_{i=1}^{N} U_i^T U_i - \frac{1}{2\sigma_V^2} \sum_{j=1}^{M} V_j^T V_j - \frac{1}{2} \left( \kappa \ln \sigma^2 + ND \ln \sigma_U^2 + MD \ln \sigma_V^2 \right) + C \qquad (3)$$

where $\kappa$ is the number of known entries and $C$ is a constant independent of the parameters. We can fix the three variance hyperparameters (the observation noise variance and the two prior variances) as constants, which reduces the optimization to the first three terms (a sum-of-squares minimization). Then defining $\lambda_U = \sigma^2 / \sigma_U^2$ and $\lambda_V = \sigma^2 / \sigma_V^2$ and multiplying by $-\sigma^2$ results in the following objective function:

$$E = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \left( R_{ij} - U_i^T V_j \right)^2 + \frac{\lambda_U}{2} \sum_{i=1}^{N} \| U_i \|_{Fro}^2 + \frac{\lambda_V}{2} \sum_{j=1}^{M} \| V_j \|_{Fro}^2 \qquad (4)$$

where $\| \cdot \|_{Fro}$ is the Frobenius norm. Note that if all values are known (i.e. $I_{ij} = 1$ for all $i, j$) then as $\lambda_U, \lambda_V \rightarrow 0$, $E$ reduces to the SVD objective function.

The objective function, (4), can be minimized using the method of steepest descent which is linear in the number of observations.
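
A minimal sketch of this minimization (plain full-batch gradient descent on synthetic data, with arbitrary hyperparameter values; `pmf_gradient_descent` is my own name, not from the paper):

```python
import numpy as np

def pmf_gradient_descent(R, I, D=5, lam_U=0.01, lam_V=0.01, lr=0.02, epochs=1000, seed=0):
    """Minimize E = 1/2 sum_ij I_ij (R_ij - U_i^T V_j)^2
                    + lam_U/2 ||U||^2 + lam_V/2 ||V||^2
    by gradient descent; only observed entries (I == 1) contribute."""
    rng = np.random.default_rng(seed)
    N, M = R.shape
    U = 0.1 * rng.normal(size=(D, N))
    V = 0.1 * rng.normal(size=(D, M))
    for _ in range(epochs):
        err = I * (U.T @ V - R)        # residuals masked to observed entries
        gU = V @ err.T + lam_U * U     # dE/dU
        gV = U @ err + lam_V * V       # dE/dV
        U -= lr * gU
        V -= lr * gV
    return U, V

# synthetic rank-3 ratings with roughly 40% of entries observed
rng = np.random.default_rng(1)
R = rng.normal(size=(3, 20)).T @ rng.normal(size=(3, 30))
I = (rng.random(R.shape) < 0.4).astype(float)

U, V = pmf_gradient_descent(R, I, D=3)
rmse = np.sqrt((I * (R - U.T @ V) ** 2).sum() / I.sum())
print(f"training RMSE on observed entries: {rmse:.3f}")
```

Each epoch touches only the observed entries, which is why the cost per pass is linear in the number of observations.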

The following figure (taken from the authors' paper listed in the Reference section) shows the graphical model for Probabilistic Matrix Factorization (PMF):

In this paper, the dot product between user- and movie-specific feature vectors is passed through the logistic function $g(x) = 1/(1 + \exp(-x))$, which bounds the range of predictions, since a simple linear-Gaussian model makes predictions outside the range of valid rating values. As well, the ratings from 1 to $K$ are mapped to the interval $[0, 1]$ using the function $t(x) = (x - 1)/(K - 1)$. This ensures that the range of valid rating values matches the range of predictions made by the model. Thus,

$$p(R \mid U, V, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \left[ \mathcal{N}(R_{ij} \mid g(U_i^T V_j), \sigma^2) \right]^{I_{ij}}$$
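
These two maps can be written directly (a trivial sketch; for the Netflix ratings, $K = 5$):

```python
import numpy as np

def g(x):
    """Logistic function: bounds predictions to the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def t(x, K=5):
    """Map a rating in {1, ..., K} to [0, 1], matching the range of g."""
    return (x - 1.0) / (K - 1.0)

print(t(np.array([1, 2, 3, 4, 5])))  # ratings 1..5 map to 0, 0.25, 0.5, 0.75, 1
print(g(0.0))                        # 0.5
```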

## Automatic Complexity Control for PMF Models

In order for PMF models to generalize well to new data, it is essential to apply good *capacity control*. Given sufficiently many factors, a PMF model can approximate any given matrix arbitrarily well.

The simplest way to control a PMF model's capacity is to change the dimensionality of feature vectors. However, this approach is not suitable if the dataset is unbalanced, because any single number of feature dimensions will be too high for some feature vectors and at the same time be too low for other feature vectors.

The use of regularization parameters such as $\lambda_U$ and $\lambda_V$ defined above is a more flexible approach to regularization. Cross-validation is usually the simplest way to find suitable values for these parameters; however, it is usually very computationally expensive.

A method proposed by Nowlan and Hinton <ref name = "NH1992">S.J. Nowlan and G. E. Hinton. "Simplifying neural networks by soft weight-sharing". *Neural Computation*, 4:473-493, 1992.</ref>, which was originally applied to neural networks, can be used to determine suitable values for a PMF model's regularization parameters automatically and at the same time without significantly affecting the model's training time.

The problem of approximating a matrix in the $L_2$ sense by a product of two low-rank matrices, where the low-rank matrices are regularized by penalizing their Frobenius norms, can be expressed as MAP estimation (described in detail in Jaakkola's lecture slides) in a probabilistic model that has spherical Gaussian priors (also described in detail in the same lecture slides) on the rows of the low-rank matrices. The complexity of the PMF model is controlled by the hyperparameters, which consist of the noise variance $\sigma^2$ and the parameters of the priors, $\sigma_U^2$ and $\sigma_V^2$.

As suggested in <ref name = "NH1992"/>, by introducing priors for the hyperparameters and maximizing the PMF model's log-posterior (more details can be found in Nychka's slides) over both the parameters and the hyperparameters, one can use the training data to automatically control the PMF model's complexity. Thus, for the Netflix data, by using spherical priors for the user and movie feature vectors, one obtains the standard form of PMF whose values for $\lambda_U$ and $\lambda_V$ are automatically selected. In summary, to control the PMF model's capacity automatically and efficiently, we estimate the parameters and hyperparameters by maximizing the log posterior given by

$$\ln p(U, V, \sigma^2, \Theta_U, \Theta_V \mid R) = \ln p(R \mid U, V, \sigma^2) + \ln p(U \mid \Theta_U) + \ln p(V \mid \Theta_V) + \ln p(\Theta_U) + \ln p(\Theta_V) + C$$

where $\Theta_U$ and $\Theta_V$ are the hyperparameters for the priors over the user feature vectors and the movie feature vectors, respectively, and $C$ is a constant that does not depend on the parameters or hyperparameters.

When the prior distribution is Gaussian and the movie and user feature vectors are kept fixed, the optimal hyperparameters can be expressed in closed form. Thus, to simplify learning, we alternate between optimizing the hyperparameters and updating the feature vectors using steepest ascent with the hyperparameter values fixed. When the prior is a mixture of Gaussians, the hyperparameters can be updated by performing a single step of EM. The authors generally use improper priors for the hyperparameters, but it is easy to extend the closed-form updates to handle conjugate priors for the hyperparameters.
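
For the spherical-Gaussian case with improper (flat) hyperpriors, setting the derivative of the log posterior (3) with respect to each variance to zero gives particularly simple closed-form updates. The sketch below illustrates one such hyperparameter step; the function name and variables are my own, not the authors' code:

```python
import numpy as np

def update_variances(R, I, U, V):
    """Closed-form MAP updates for sigma^2, sigma_U^2, sigma_V^2 with the
    feature vectors U, V held fixed and flat (improper) hyperpriors."""
    D, N = U.shape
    _, M = V.shape
    kappa = I.sum()                                    # number of observed ratings
    sigma2 = (I * (R - U.T @ V) ** 2).sum() / kappa    # noise variance
    sigma2_U = (U ** 2).sum() / (N * D)                # user prior variance
    sigma2_V = (V ** 2).sum() / (M * D)                # movie prior variance
    return sigma2, sigma2_U, sigma2_V

# toy usage: the effective regularization strengths for the next block of
# gradient steps on U and V are lam_U = sigma2 / sigma2_U and
# lam_V = sigma2 / sigma2_V
rng = np.random.default_rng(0)
U = rng.normal(size=(3, 5))
V = rng.normal(size=(3, 8))
R = U.T @ V + 0.1 * rng.normal(size=(5, 8))
I = np.ones_like(R)
s2, s2U, s2V = update_variances(R, I, U, V)
print(s2 / s2U, s2 / s2V)
```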

## Constrained Probabilistic Matrix Factorization

In the above PMF approach, if a row is very sparse (i.e., a user has rated very few movies) then the estimated values will be approximately equal to the mean rating given by the other users. Thus, we need a way to remedy this. This is done by introducing a constraint and modifying PMF to create the constrained PMF algorithm.

First, we define an indicator matrix $I$ whose element $I_{ij}$ is $1$ if user $i$ has rated movie $j$ and is $0$ otherwise. Recall that $D$ is the estimated number of latent factors. Let $W \in \mathbb{R}^{D \times M}$ be the latent similarity constraint matrix, whose entries measure similarities between the movies and the latent variables.

Then for each user $i$, $i = 1, \dots, N$, we can define

$$U_i = Y_i + \frac{\sum_{k=1}^{M} I_{ik} W_k}{\sum_{k=1}^{M} I_{ik}}$$

This means that users with similar sets of watched movies will have similar priors for $U_i$.

Constrained PMF differs from PMF in the second term, so that $U_i$ depends on which movies user $i$ has rated; in contrast, in the unconstrained PMF model, $U_i$ and $Y_i$ are equal, because the prior mean is fixed at $0$. So, replacing $U_i$ in (1) gives

$$p(R \mid Y, V, W, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \left[ \mathcal{N}\left( R_{ij} \,\middle|\, g\left( \left[ Y_i + \frac{\sum_{k=1}^{M} I_{ik} W_k}{\sum_{k=1}^{M} I_{ik}} \right]^T V_j \right), \sigma^2 \right) \right]^{I_{ij}}$$

where, again, $g$ is the logistic function.

$W$ is regularized by placing a zero-mean spherical Gaussian prior on it; this prior is $p(W \mid \sigma_W^2) = \prod_{k=1}^{M} \mathcal{N}(W_k \mid 0, \sigma_W^2 \mathbf{I})$.

Again, we can transform this maximization into a minimization, as with (4), and use gradient descent (linear in the number of observations). There is a significant improvement in performance.
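
The constrained user vectors can be assembled in one vectorized step (an illustrative numpy sketch with toy dimensions; all names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, M = 4, 6, 10
Y = rng.normal(size=(D, N))                    # free user offsets Y_i
W = rng.normal(size=(D, M))                    # latent similarity constraint matrix
I = (rng.random((N, M)) < 0.3).astype(float)   # I[i, k] = 1 iff user i rated movie k
I[I.sum(axis=1) == 0, 0] = 1.0                 # guard: every user rates >= 1 movie

# U_i = Y_i + (sum_k I_ik W_k) / (sum_k I_ik): each user's vector is shifted
# by the average constraint column of the movies that user has rated
U = Y + (W @ I.T) / I.sum(axis=1)

# check one user explicitly against the definition
i = 0
rated = I[i] == 1.0
assert np.allclose(U[:, i], Y[:, i] + W[:, rated].mean(axis=1))
print(U.shape)  # (4, 6)
```

Two users who have rated the same movies receive the same offset, which is what pulls sparse users away from the global mean.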

The following figure (taken from the authors' paper listed in the Reference section) shows the graphical model for the constrained Probabilistic Matrix Factorization (PMF) model:

## Experimental Results

Consider a subset of the Netflix dataset with 50,000 users and 1,850 movies, with 1,082,982 known values. In this dataset, more than half of the rows are sparse (fewer than 10 entries), which will help to show the improvement offered by constrained PMF.

The learning rate and momentum were set, by experiment, to 0.005 and 0.9, respectively. A rank $D = 30$ factorization was created using SVD, PMF, and constrained PMF, with the regularization parameters fixed as reported in the paper. The results are shown in the figures below<ref name="PMF"/>. On the left, RMSE is plotted against the number of passes through the data. On the right, RMSE is plotted against the sparsity of the rows.

Both PMF models outperform SVD and, further, constrained PMF beats PMF. An interesting, but expected, result is that constrained PMF is clearly the best method in the case of sparse rows, but is indistinguishable from PMF when there are many known entries. They both do much better than a simple average of the movie ratings as the amount of information increases.

Another test looked at a matrix in which the only known information is whether a user has rated a movie. Tests show that PMF can use this information to provide better estimates than a simple average.

Results of similar tests on the full Netflix dataset provide comparable results.

## Conclusion

The main problem in this paper was to factor a matrix with many missing values, including many sparse rows, with the hope of using the known values to provide information about the missing values.

PMF provides a probabilistic approach using Gaussian assumptions on the known data and the factor matrices. We can further constrain priors to improve the algorithm, especially in the case of sparse rows.

Experimental results show that PMF and constrained PMF perform quite well.

A natural next step is a fully Bayesian approach, which involves a computationally expensive MCMC step but has shown improvement in preliminary experiments. The authors develop such an approach in subsequent work<ref name = "SalMnihICML08">R. Salakhutdinov and A. Mnih. "Bayesian Probabilistic Matrix Factorization using Markov Chain Monte Carlo". *Proceedings of the International Conference on Machine Learning*, 2008.</ref> and provide code for this Bayesian method at [1].

## References

<references />