# Graph Laplacian Regularization for Large-Scale Semidefinite Programming

## Introduction

This paper<ref>K. Q. Weinberger et al. Graph Laplacian Regularization for Large-Scale Semidefinite Programming. Advances in Neural Information Processing Systems, 2007.</ref> introduces a new approach for discovering low-dimensional representations of high-dimensional data when, as in many applications, local proximity measurements are also available. Sensor localization is a well-known example of such a problem. Existing approaches solve semidefinite programs (SDPs) with low-rank solutions using convex optimization methods; however, the SDP approach does not scale well to large inputs. The main contribution of this paper is a matrix factorization that yields much smaller and faster SDPs than previous approaches for problems of this type. The factorization comes from expanding the solution of the original problem in terms of the bottom eigenvectors of a graph Laplacian. Since the smaller SDP obtained from this factorization only approximates the original problem, its solution is then refined using gradient descent. The approach is illustrated on the localization of large-scale sensor networks.

## Sensor localization

Assuming sensors can only estimate their local pairwise distances to nearby sensors via radio transmitters, the problem is to recover the whole network topology. In other words, given $n$ sensors and an estimate $d_{ij}$ of the local distance between adjacent sensors $i$ and $j$, the desired output is the planar sensor coordinates $x_1, x_2, ..., x_n \in R^2$.

### Work on this issue so far

Work on this problem starts with minimizing the sum-of-squares loss function
$\,\min_{x_1,...,x_n}\Sigma_{i\sim j}{(\|x_i-x_j\|^2-d_{ij}^2)^2}$ (1)
and adding a centering constraint (assuming no sensor location is known in advance):
$\,\|\Sigma_i{x_i}\|^2 = 0$ (2)
The problem here is that this optimization is not convex and is likely to be trapped in local minima. To address this, an $n \times n$ inner-product matrix $X$ is defined by $X_{ij} = x_i \cdot x_j$, and by relaxing the constraint that the sensor locations $x_i$ lie in the plane $R^2$, the following convex formulation is obtained:
Minimize: $\,\Sigma_{i\sim j}{(X_{ii}-2X_{ij}+X_{jj}-d_{ij}^2)^2}$ (3)
subject to: (i) $\,\Sigma_{ij}{X_{ij}=0}$ and (ii) $X \succeq 0$
The vectors $\,x_i$ will lie in a subspace with dimensionality equal to the rank of the solution $X$. Projecting the $x_i$ onto their 2D subspace of maximum variance, obtained from the top 2 eigenvectors of $X$, yields planar coordinates. However, the higher the rank of $X$, the greater the information loss after projection. Because the projection error grows with the rank, low-rank solutions are preferred; maximizing the variance (the trace) tends to flatten the solution into a low-dimensional subspace. Therefore, an extra term is added to favor solutions with high variance (high trace):
Maximize: $\,tr(X)-v\Sigma_{i\sim j}{(X_{ii}-2X_{ij}+X_{jj}-d_{ij}^2)^2}$ (4)
subject to: (i) $\,\Sigma_{ij}{X_{ij}=0}$ and (ii) $\,X \succeq 0$
where the parameter $v\gt 0$ balances the trade-off between maximizing variance and preserving local distances; this is the maximum variance unfolding (MVU) formulation.
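As a concrete illustration, here is a minimal NumPy sketch of the loss in eq. (1); the toy network and all names are our own, not from the paper. Note that the loss is invariant to translating all sensors, which is why the centering constraint of eq. (2) is needed:

```python
import numpy as np

def distance_loss(x, edges, d):
    """Sum-of-squares loss of eq. (1): sum over edges of (||x_i - x_j||^2 - d_ij^2)^2."""
    diffs = x[edges[:, 0]] - x[edges[:, 1]]
    sq_dists = np.sum(diffs ** 2, axis=1)
    return np.sum((sq_dists - d ** 2) ** 2)

# Toy network: 4 sensors on a unit square, edges along the sides.
x_true = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
d = np.linalg.norm(x_true[edges[:, 0]] - x_true[edges[:, 1]], axis=1)

print(distance_loss(x_true, edges, d))        # ~0 at the true layout
print(distance_loss(x_true + 0.1, edges, d))  # translating all sensors leaves the loss ~0
```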

## Matrix factorization

Assume $G$ is the neighborhood graph defined by the sensor network, and regard the sensor locations as a function defined over the nodes of this graph. By spectral graph theory, functions on a graph can be approximated using the eigenvectors of the graph's Laplacian matrix as basis functions.

The graph Laplacian is defined by:

$L_{ij}= \left\{\begin{matrix} \deg(v_i) & \text{if } i=j \\ -1 & \text{if } i\neq j \text{ and } v_i \text{ is adjacent to } v_j \\ 0 & \text{otherwise}\end{matrix}\right.$

The sensor locations can be approximated using the $m$ bottom eigenvectors of the Laplacian matrix of $G$. Expanding the locations in this basis yields a matrix factorization for X: $x_i\approx\Sigma_{\alpha = 1}^m Q_{i\alpha}y_{\alpha}$
where $Q$ is the $n\times m$ matrix whose columns are the $m$ bottom eigenvectors of the Laplacian matrix, and the vectors $y_{\alpha}$ are unknown and depend on the $d_{ij}$. Now, if we define the inner products of these vectors as $Y_{\alpha\beta} = y_{\alpha}\cdot y_{\beta}$, we obtain the factorization $X\approx QYQ^T$ (6)
Using this approximation, we can solve an optimization over $Y$ that is much smaller than the one over $X$. Since the columns of $Q$ are mutually orthonormal eigenvectors, $tr(QYQ^T)=tr(Y)$, so the trace term is preserved. In addition, $QYQ^T$ automatically satisfies the centering constraint because the uniform eigenvector is not included in $Q$. The optimization therefore becomes:

  Maximize:	$tr(Y)-v\Sigma_{i\sim j}{((QYQ^T)_{ii}-2(QYQ^T)_{ij}+(QYQ^T)_{jj}-d_{ij}^2)^2}$ (objective function)
subject to: $Y \succeq 0$
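A small NumPy sketch (our own illustration, not code from the paper) of how $Q$ can be computed: build the unnormalized Laplacian $L = D - A$, take its eigenvectors in ascending order of eigenvalue, and discard the uniform (constant) eigenvector so that $QYQ^T$ automatically satisfies the centering constraint:

```python
import numpy as np

def bottom_eigenvectors(adj, m):
    """Unnormalized graph Laplacian L = D - A and its m bottom non-uniform eigenvectors."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    # Skip the constant eigenvector (eigenvalue 0 for a connected graph),
    # so the remaining columns are orthogonal to the uniform vector.
    return vecs[:, 1:m + 1]

# Toy graph: a 5-node path.
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1

Q = bottom_eigenvectors(adj, m=2)
print(Q.shape)                          # (5, 2)
print(np.allclose(Q.sum(axis=0), 0))    # True: columns sum to zero (centering)
```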


## Formulation as SDP

Recall that our ultimate aim is to formulate the optimization problem as an SDP. However, the objective function, as formulated in the last section, is a quadratic function of the entries of the matrix $Y \,$. In this section, we show how to transform the quadratic objective into a linear one via the Schur complement lemma.

First of all, note that by concatenating all the columns of the $m \times m$ matrix $Y \,$ into a vector $y \in R^{m^2}\,$, the objective function takes the form $\,b^Ty - y^TAy \,$ (up to an additive constant), where the matrix $A\,$ collects all the quadratic coefficients of the objective while the vector $b \,$ collects all the linear coefficients. The matrix $A\,$ is positive semidefinite because the quadratic term $y^TAy$ is a sum of squares and hence non-negative.
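The quadratic term is a sum of squared linear forms in $y$, one per edge; stacking the coefficient rows into a matrix $C$ gives $y^TAy$ with $A = C^TC$, which is positive semidefinite by construction. A tiny NumPy check with hypothetical dimensions (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# 12 hypothetical edges; with m = 3, y = vec(Y) lives in R^9.
C = rng.standard_normal((12, 9))  # row e holds the linear coefficients c_e of edge e
A = C.T @ C                       # quadratic term: y^T A y = sum_e (c_e^T y)^2
print(np.linalg.eigvalsh(A).min() >= -1e-10)  # True: A is positive semidefinite
```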

By introducing a dummy variable $l \,$, the optimization problem can be written as

  Maximize:   $\,b^Ty - l$
subject to: (i) $Y \succeq 0$ and (ii) $l \geq y^TAy$.


To complete the formulation as an SDP, it remains to rewrite the second constraint $l \geq y^TAy$ as a linear matrix (positive semidefinite) constraint. This is accomplished by the Schur complement lemma.

### Schur Complement Lemma

In this section, we illustrate how to use the lemma rather than prove it. Readers are referred to the book "Introduction to Algorithms" by Thomas H. Cormen et al. for the formal definition of the Schur complement and a proof of the Schur complement lemma.

Suppose

$Z=\left[\begin{matrix} I & A^{\frac{1}{2}}y \\ (A^{\frac{1}{2}}y)^T & l \end{matrix}\right]$

is a positive semi-definite matrix, then by definition, $x^T Z x \geq 0 \,$ for any vector $x\,$. Let us break $x\,$ into two subvectors $x_1 \,$ and $x_2 \,$, where $x_1 \,$ has a dimension compatible with the matrix $I\,$ in $Z \,$ and $x_2 \,$ is a real number. Then,

\begin{align} x^T Z x & = ( {x_1}^T {x_2}^T ) \left[\begin{matrix} I & A^{\frac{1}{2}}y \\ (A^{\frac{1}{2}}y)^T & l \end{matrix}\right] \left[\begin{matrix} x_1 \\ x_2 \end{matrix}\right] \\ & = ( {x_1}^T {x_2}^T ) \left[\begin{matrix} x_1 + (A^{\frac{1}{2}}y) x_2 \\ (A^{\frac{1}{2}}y)^T x_1 + l x_2 \end{matrix}\right] \\ & ={x_1}^T x_1 + {x_1}^T (A^{\frac{1}{2}}y) x_2 + {x_2}^T (A^{\frac{1}{2}}y)^T x_1 + l \,{x_2}^T x_2 \\ & = (x_1 + (A^{\frac{1}{2}}y) x_2)^T (x_1 + (A^{\frac{1}{2}}y) x_2) + {x_2}^T ( l - (A^{\frac{1}{2}}y)^T(A^{\frac{1}{2}}y)) x_2 \,\,\,\text{(completing square)}\\ & = (x_1 + (A^{\frac{1}{2}}y) x_2)^T (x_1 + (A^{\frac{1}{2}}y) x_2) + {x_2}^T ( l - y^T A y) x_2 \\ \end{align}

Note that we can take $x_1 = - (A^{\frac{1}{2}}y)\, x_2 \,$ to make the first term vanish. This leaves ${x_2}^T ( l - y^T A y) x_2 \geq 0 \,$, which is clearly equivalent to $l \geq y^T A y \,$ because $x_2\,$ is a real number.
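The equivalence can also be checked numerically. The following NumPy sketch (dimensions and data are illustrative, not from the paper) builds $Z$ for a random PSD matrix $A$ and verifies that $Z \succeq 0$ exactly when $l \geq y^TAy$:

```python
import numpy as np

def schur_block(v, l):
    """Z = [[I, v], [v^T, l]] with v = A^{1/2} y, as in the lemma."""
    n = v.shape[0]
    Z = np.eye(n + 1)
    Z[:n, n] = v
    Z[n, :n] = v
    Z[n, n] = l
    return Z

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T                        # random positive semidefinite matrix
y = rng.standard_normal(4)
v = np.linalg.cholesky(A).T @ y    # plays the role of A^{1/2} y: v^T v = y^T A y
quad = y @ A @ y

for l in (quad + 1e-3, quad - 1e-3):
    Z = schur_block(v, l)
    is_psd = np.linalg.eigvalsh(Z).min() >= -1e-9
    print(l >= quad, is_psd)       # the two flags always agree
```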

The above calculation allows us to formulate the optimization problem as the following SDP:

  Maximize:   $\,b^Ty - l$
subject to: (i) $Y \succeq 0$ and (ii) $\left[\begin{matrix} I & A^{\frac{1}{2}}y \\ (A^{\frac{1}{2}}y)^T & l \end{matrix}\right] \succeq 0$.


In this form, the only variables of the SDP are the $m(m+1)/2$ elements of $Y\,$ and the unknown scalar $l\,$. The only constraints are the positive semidefiniteness of $Y$ and a linear matrix inequality of size $(m^2+1)\times(m^2+1)$. It is worth noting that the complexity of the SDP does not depend on the number of nodes or edges in the network.

As the matrix factorization only provides an approximation to the global minimum, the solution is refined by using it as the initial starting point for gradient-based local search on the original problem. This is done in two steps:
1. Starting from the m-dimensional solution of eq. (6), use conjugate gradient methods to maximize the objective function in eq. (4).
2. Project the result of the previous step into the $R^2$ plane and use conjugate gradient methods to minimize the loss function in eq. (1).
The conjugate gradient method is an iterative method for minimizing a quadratic function whose Hessian matrix (the matrix of second partial derivatives) is positive definite.
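The refinement on the loss of eq. (1) can be sketched as follows. The paper uses conjugate gradient methods; for a self-contained illustration we substitute plain gradient descent, and the toy data, step size, and names are our own choices:

```python
import numpy as np

def loss_and_grad(x, edges, d):
    """Loss of eq. (1) and its gradient w.r.t. the planar coordinates x (n x 2)."""
    diff = x[edges[:, 0]] - x[edges[:, 1]]
    resid = np.sum(diff ** 2, axis=1) - d ** 2
    g = 4.0 * resid[:, None] * diff        # d/dx_i of (||x_i-x_j||^2 - d^2)^2
    grad = np.zeros_like(x)
    np.add.at(grad, edges[:, 0], g)
    np.add.at(grad, edges[:, 1], -g)
    return np.sum(resid ** 2), grad

def refine(x0, edges, d, steps=1000, lr=5e-3):
    """Gradient-descent refinement of an initial (e.g. factorization-based) layout."""
    x = x0.copy()
    for _ in range(steps):
        _, grad = loss_and_grad(x, edges, d)
        x -= lr * grad
    return x

# Toy check: perturb a braced unit square and refine it back.
x_true = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
d = np.linalg.norm(x_true[edges[:, 0]] - x_true[edges[:, 1]], axis=1)
x0 = x_true + 0.05 * np.random.default_rng(1).standard_normal(x_true.shape)
x = refine(x0, edges, d)
print(loss_and_grad(x, edges, d)[0] < loss_and_grad(x0, edges, d)[0])  # True
```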

## Results

The figure<ref> The same paper. Figure 2 </ref> below shows the sensor locations inferred for the n = 1055 largest cities in the continental US. Local distances were estimated to up to 18 neighbors within radius r = 0.09, and the measurements were corrupted by 10% Gaussian noise over the true local distance. Using the m = 10 bottom eigenvectors of the graph Laplacian, the solution provides a good initial starting point (left picture) for gradient-based improvement. The right picture shows the sensor locations after the improvement.

File:fig2L.jpg
Initial sensor locations
File:fig2R.jpg
Sensor locations after gradient-based improvement

The second simulated network, shown in the figure<ref> The same paper. Figure 3 </ref> below, placed nodes at n = 20,000 points sampled uniformly inside the unit square. Local distances were estimated to up to 20 other nodes within radius r = 0.06. Using the m = 10 bottom eigenvectors of the graph Laplacian, constructing and solving the SDP took 19 s, and 100 iterations of conjugate gradient descent took another 52 s.

File:fig3.jpg
Results on a simulated network with n = 20,000 uniformly distributed nodes inside a centered unit square

For the simulated network with nodes at US cities, the figure<ref> The same paper. Figure 4 </ref> below plots the loss function in eq. (1) and the computation time against the number of eigenvectors. The figure shows a trade-off between obtaining a better solution and increasing computation time, and that $m\approx 10$ manages this trade-off best.

File:fig4.jpg
Left: The value of loss function. Right: The computation time

## Conclusion

An approach for inferring low-dimensional representations from local distance constraints using maximum variance unfolding (MVU) was proposed.
Its main idea is a matrix factorization computed from the bottom eigenvectors of the graph Laplacian.
The resulting initial solution can be refined by local search methods.
The approach is suitable for large inputs, and the size of its SDP does not depend on the number of nodes or edges in the network.

<references/>