Inductive Kernel Low-rank Decomposition with Priors: A Generalized Nystrom Method


Introduction

Low-rankness is an important structure widely exploited in machine learning. Low-rank matrix decomposition produces a compact representation of large matrices, which is the key to scaling up a great variety of kernel learning algorithms. However, there are still some concerns with existing approaches. First, most of them are intrinsically unsupervised and focus only on numerical approximation of given matrices, i.e., they cannot incorporate prior knowledge. Second, in many decomposition methods the factorization can only be computed for samples available at the training stage, which makes it difficult to generalize the decomposition to new samples.

This paper introduces a low-rank decomposition algorithm by generalizing the Nystrom method to incorporate side information. The novelty is to interpret the matrix-completion view of the Nystrom method as a bilateral extrapolation of a dictionary kernel, and to generalize it to incorporate prior information in computing improved low-rank decompositions. The authors claim the two advantages of the method are its generative structure and its linear complexity in the sample size.

The Nystrom method originated from solving integral equations and was introduced to the machine learning community by Williams et al.<ref> Williams, C. and Seeger, M. Using the Nystrom method to speed up kernel machines. Advances in Neural Information Processing Systems 13, 2001. </ref> and Fowlkes et al.<ref> Fowlkes, C., Belongie, S., Chung, F., and Malik, J. Spectral grouping using the Nystrom method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2): 214-225, 2004. </ref>. Given a kernel function [math]\displaystyle{ k(.,.) }[/math] and a sample set with underlying distribution [math]\displaystyle{ p(.) }[/math], the Nystrom method aims at solving the following integral equation

[math]\displaystyle{ \int k(x,y)p(y)\Phi_i(y)dy = \lambda_i\Phi_i(x) }[/math]

Here [math]\displaystyle{ \Phi_i(x) }[/math] and [math]\displaystyle{ \lambda_i }[/math] are the ith eigenfunction and eigenvalue of the operator [math]\displaystyle{ k(.,.) }[/math] with respect to [math]\displaystyle{ p }[/math]. The idea is to draw a set of [math]\displaystyle{ m }[/math] samples [math]\displaystyle{ Z }[/math], called landmark points, from the underlying distribution and to approximate the expectation with the empirical average as

[math]\displaystyle{ \frac{1}{m}\sum_{j=1}^{m}k(x,z_j)\Phi_i(z_j) = \lambda_i\Phi_i(x) }[/math]

By choosing [math]\displaystyle{ x }[/math] as [math]\displaystyle{ z_1, z_2,...,z_m }[/math] as well, the following eigenvalue decomposition is obtained: [math]\displaystyle{ W\Phi_i = \lambda_i\Phi_i }[/math], where [math]\displaystyle{ W }[/math] is the [math]\displaystyle{ m }[/math] by [math]\displaystyle{ m }[/math] kernel matrix defined on the landmark points, and [math]\displaystyle{ \Phi_i }[/math] (an m by 1 vector) and [math]\displaystyle{ \lambda_i }[/math] are the ith eigenvector and eigenvalue of [math]\displaystyle{ W }[/math]. In practice, given a large dataset, the Nystrom method selects [math]\displaystyle{ m }[/math] landmark points [math]\displaystyle{ Z }[/math] with [math]\displaystyle{ m\lt \lt n }[/math] and computes the eigenvalue decomposition of [math]\displaystyle{ W }[/math]. Then the eigenvectors of [math]\displaystyle{ W }[/math] are extrapolated to the whole sample set. The whole n by n kernel matrix [math]\displaystyle{ K }[/math] can be implicitly reconstructed by

[math]\displaystyle{ K\approx EW^{\dagger}E^{T} }[/math]

where [math]\displaystyle{ W^{\dagger} }[/math] is the pseudo-inverse of [math]\displaystyle{ W }[/math], and [math]\displaystyle{ E }[/math] is the n by m kernel matrix between the sample set and the landmark points. The Nystrom method requires [math]\displaystyle{ O(mn) }[/math] space and [math]\displaystyle{ O(m^2n) }[/math] time, which are linear in the sample size.
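The reconstruction above can be summarized in a few lines of NumPy. The following is a minimal sketch, assuming a Gaussian kernel and a given set of landmark indices; the function names are illustrative and not part of the original method.

<pre>
import numpy as np

def gaussian_kernel(X, Y, b):
    """Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / b)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / b)

def nystrom(X, landmark_idx, b):
    """Return E (n x m) and W^+ (m x m) so that K is approximated by E W^+ E^T."""
    Z = X[landmark_idx]                  # m landmark points
    W = gaussian_kernel(Z, Z, b)         # m x m dictionary kernel on the landmarks
    E = gaussian_kernel(X, Z, b)         # n x m kernel between all samples and landmarks
    return E, np.linalg.pinv(W)

# Usage: E, W_pinv = nystrom(X, idx, b); K is never formed explicitly,
# since E W_pinv E^T can be applied on demand in O(mn) space.
</pre>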

Bilateral Extrapolation of Dictionary Kernel

The ijth component of the kernel matrix is constructed as

[math]\displaystyle{ K_{ij} = E_iW^{\dagger}E_j^T }[/math]

where [math]\displaystyle{ E_i }[/math] is the ith row of the extrapolation matrix [math]\displaystyle{ E }[/math]. That is, the similarity between any two points [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math] is constructed by first computing their respective similarities to the landmark set and then modulating them by the inverse of the similarities among the landmark points, [math]\displaystyle{ W^{\dagger} }[/math].


Proposition 1 Given m landmark points [math]\displaystyle{ Z }[/math], use [math]\displaystyle{ K_{ij} = E_iW^{\dagger}E_j^T }[/math] to construct the similarity between any two samples [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math]. Let [math]\displaystyle{ z_p }[/math] and [math]\displaystyle{ z_q }[/math] be the closest landmark points to [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math], respectively. Let [math]\displaystyle{ d_p=\| x_i - z_p\| }[/math] and [math]\displaystyle{ d_q=\| x_j - z_q\| }[/math]. Let the kernel function [math]\displaystyle{ k(.,.) }[/math] satisfy [math]\displaystyle{ k(x,y) - k(x,z) \le \eta \|y-z\| }[/math], and let [math]\displaystyle{ c = m\times \max k(.,.) }[/math]. Then the reconstructed similarity [math]\displaystyle{ K_{ij} }[/math] and the pqth entry of [math]\displaystyle{ W }[/math] satisfy the following relation

[math]\displaystyle{ |K_{ij}-W_{pq}| \le \sqrt{m}\eta(cd_p+cd_q+\sqrt{m}\eta d_p d_q)\|W^{\dagger}\|_F }[/math]

The inequality provides a bound on the distance between [math]\displaystyle{ K_{ij} }[/math] and [math]\displaystyle{ W_{pq} }[/math]. This means that the similarity matrix [math]\displaystyle{ W }[/math] can serve as a dictionary kernel whose entries are extrapolated bilaterally onto any pair of samples [math]\displaystyle{ (x_i, x_j) }[/math] according to the proximity relation between the landmark points and the samples.

This kernel-extrapolation view of the Nystrom method inspires us to generalize it to handle prior constraints in learning a low-rank kernel.

Including Side Information

The quality of the dictionary affects the whole kernel matrix. In the original Nystrom method the dictionary kernel W is simply computed as the pairwise similarity between landmark points. The intuition here is to incorporate side information to construct a more appropriate dictionary kernel. Suppose we are given a set of labeled and unlabeled samples (semi-supervised learning). The task is to learn (the inverse of) a new dictionary kernel, denoted by S, subject to two considerations. First, the reconstructed kernel [math]\displaystyle{ ESE^T }[/math] should preserve the structure of the original kernel K, since K encodes important pairwise relations between samples (incorporating unsupervised information). Second, the reconstructed kernel on the labeled samples, [math]\displaystyle{ E_lSE^T_l }[/math], should be consistent with the given side information (incorporating supervised information).

To satisfy both criteria the paper arrives at the following optimization problem:

[math]\displaystyle{ min_{S\in R^{m \times m}} \lambda \|S-S_0\|^2_F+\|E_lSE_l^T - K^*_l\|^2_F }[/math] s.t. [math]\displaystyle{ S \succeq 0. }[/math]

Here, [math]\displaystyle{ S_0 = W^{\dagger} }[/math], where [math]\displaystyle{ W^{\dagger} }[/math] is from the standard Nystrom method. [math]\displaystyle{ K_l^* }[/math] is the ideal kernel: its ijth entry equals 1 when [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math] are in the same class, and 0 otherwise. The first term of the objective function corresponds to the first consideration and the second term to the second. The reason for choosing the Frobenius (Euclidean) norm is that minimizing the Euclidean distance is related to maximizing the kernel alignment; we can then use the normalized kernel alignment score afterwards as an independent measure to choose the hyper-parameter [math]\displaystyle{ \lambda }[/math].
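As a concrete illustration, the ideal kernel and the objective can be written in a few lines of NumPy. This is a minimal sketch under assumed variable names (S, S0, E_l, K_star, lam), not code from the paper.

<pre>
import numpy as np

def ideal_kernel(y):
    """K*_ij = 1 if labeled samples i and j share a class, 0 otherwise."""
    y = np.asarray(y)
    return (y[:, None] == y[None, :]).astype(float)

def objective(S, S0, E_l, K_star, lam):
    """J(S) = lambda * ||S - S0||_F^2 + ||E_l S E_l^T - K*||_F^2."""
    fit = E_l @ S @ E_l.T - K_star
    return lam * np.linalg.norm(S - S0, 'fro') ** 2 + np.linalg.norm(fit, 'fro') ** 2
</pre>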

Side Information as Grouping Constraints

The side information can also be used as grouping constraints and then incorporated into the objective function. Let [math]\displaystyle{ I }[/math] denote a set of grouping constraints and let [math]\displaystyle{ \mathcal{X}_I }[/math] be the subset of samples involved in such constraints. Define a matrix [math]\displaystyle{ T }[/math] on [math]\displaystyle{ \mathcal{X}_I\times\mathcal{X}_I }[/math] with [math]\displaystyle{ T_{ij} = 1 }[/math] if the pair [math]\displaystyle{ (x_i,x_j) }[/math] is constrained and 0 otherwise. The objective function then becomes (with the product with [math]\displaystyle{ T }[/math] taken element-wise),

[math]\displaystyle{ min_{S\in R^{m \times m}} \lambda \|S-S_0\|^2_F+\|T\cdot (E_lSE_l^T) - K^*_l\|^2_F }[/math] s.t. [math]\displaystyle{ S \succeq 0. }[/math]

Optimization

Since both the objective function and the positive semidefinite constraint are convex, there exists a global optimum. The paper uses a gradient mapping strategy <ref name="Nemi1994"> Nemirovski, A. Efficient method in convex programming. Lecture Notes 1994. </ref> to find the optimal solution. The method alternates a gradient descent step and a projection step. Given the current solution [math]\displaystyle{ S^{(t)} }[/math], we update it by [math]\displaystyle{ S^{(t+1)} = S^{(t)} - \eta^{(t)}\nabla_{S^{(t)}} }[/math], where [math]\displaystyle{ \nabla_{S^{(t)}} }[/math] is the gradient of the objective function at [math]\displaystyle{ S^{(t)} }[/math].

[math]\displaystyle{ \nabla_{S} = 2\lambda(S-S_0) +2E_l^T(E_lSE_l^T-K^*_l)E_l }[/math]

The step length [math]\displaystyle{ \eta^{(t)} }[/math] is determined by the Armijo-Goldstein rule<ref name="Nemi1994"/>.

The descent step:

[math]\displaystyle{ B_{A}^*={\underset{B \succeq 0}{\text{arg min}}} \ tr({\nabla}_{S^{(t)}}B)+{\frac A 2} \| B-S^{(t)} \|_F^2 }[/math];

while [math]\displaystyle{ J(B_A^*)\gt J(S^{(t)})+tr({\nabla}_{S^{(t)}}(B_A^*-S^{(t)}))+{\frac A 2} \| B_A^*-S^{(t)} \|_F^2 }[/math]

[math]\displaystyle{ \ \qquad }[/math] [math]\displaystyle{ \ \qquad }[/math] Increase A by a constant factor;

[math]\displaystyle{ \ \qquad }[/math] [math]\displaystyle{ \ \qquad }[/math] [math]\displaystyle{ B_{A}^*={\underset{B \succeq 0}{\text{arg min}}} \ tr({\nabla}_{S^{(t)}}B)+{\frac A 2} \| B-S^{(t)} \|_F^2 }[/math];

end while

End of descent step.

After the descent step, we project the iterate [math]\displaystyle{ S^{(t+1)} }[/math] onto the positive semi-definite cone as follows

[math]\displaystyle{ S^{(t+1)} = U^{(t+1)}\Lambda_{+}^{(t+1)}(U^{(t+1)})^T }[/math]

where [math]\displaystyle{ U^{(t+1)} }[/math] and [math]\displaystyle{ \Lambda^{(t+1)} }[/math] are the eigenvectors and eigenvalues of [math]\displaystyle{ S^{(t+1)} }[/math], and [math]\displaystyle{ \Lambda_{+}^{(t+1)} }[/math] keeps only the non-negative eigenvalues (negative eigenvalues are set to zero).
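The following NumPy sketch illustrates the overall iteration: a simple projected gradient descent with a fixed step size stands in for the full gradient-mapping / Armijo-Goldstein procedure described above, so the step size and iteration count are illustrative assumptions.

<pre>
import numpy as np

def project_psd(S):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    S = (S + S.T) / 2.0
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

def gradient(S, S0, E_l, K_star, lam):
    """Gradient of J(S) = lam*||S - S0||_F^2 + ||E_l S E_l^T - K*||_F^2."""
    return 2.0 * lam * (S - S0) + 2.0 * E_l.T @ (E_l @ S @ E_l.T - K_star) @ E_l

def solve_S(S0, E_l, K_star, lam, eta=1e-3, n_iter=200):
    """Minimize J(S) subject to S >= 0 by projected gradient descent (minimal sketch)."""
    S = S0.copy()
    for _ in range(n_iter):
        S = project_psd(S - eta * gradient(S, S0, E_l, K_star, lam))
    return S
</pre>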

Initialization

The paper proposes a closed-form initialization which helps to quickly locate the optimal solution. The idea is to drop the positive semi-definite constraint in the objective and compute the point where the gradient vanishes, which gives

[math]\displaystyle{ \lambda S+E_l^TE_lSE_l^TE_l=E_l^TK_l^*E_l+\lambda S_0 }[/math]

Then we have,

[math]\displaystyle{ S+PSP^T = Q }[/math]

where

[math]\displaystyle{ P=\frac{1}{\sqrt{\lambda}}(E_l^TE_l) }[/math]
[math]\displaystyle{ Q=S_0+\frac{1}{\lambda}E_l^TK^*_lE_l }[/math]

Suppose the diagonalization of [math]\displaystyle{ P }[/math] is [math]\displaystyle{ P = U \Lambda U^T }[/math], and define [math]\displaystyle{ S=U \tilde SU^T }[/math], [math]\displaystyle{ Q= U\tilde QU^T }[/math], then it can be written as

[math]\displaystyle{ U\tilde SU^T + U\Lambda \tilde S \Lambda U^T = U \tilde QU^T }[/math]
[math]\displaystyle{ \tilde S + \Lambda\tilde S\Lambda^T = \tilde Q }[/math]

Since [math]\displaystyle{ \Lambda }[/math] is diagonal, this gives us [math]\displaystyle{ m^2 }[/math] equations

[math]\displaystyle{ \tilde S_{ij}+\Lambda_{ii}\Lambda_{jj} \tilde S_{ij} = \tilde Q_{ij}, 1 \le i, j \le m. }[/math]

Therefore we have a closed-form solution for S: [math]\displaystyle{ S=U \tilde S U^T }[/math], where [math]\displaystyle{ [ \tilde S]_{ij} = \frac{ \tilde Q_{ij}}{1+\Lambda_{ii}\Lambda_{jj}} }[/math].

After computing [math]\displaystyle{ S }[/math], we then project it onto the positive semi-definite cone. Such an initial solution can be deemed the positive semi-definite matrix closest to the minimizer of the unconstrained version of the objective function.
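A minimal NumPy sketch of this initialization, under the same assumed variable names as in the earlier sketches:

<pre>
import numpy as np

def init_S(S0, E_l, K_star, lam):
    """Closed-form initializer: solve S + P S P = Q in the eigenbasis of P, then project to PSD."""
    P = (E_l.T @ E_l) / np.sqrt(lam)
    Q = S0 + (E_l.T @ K_star @ E_l) / lam
    evals, U = np.linalg.eigh(P)                          # P = U diag(evals) U^T
    Q_tilde = U.T @ Q @ U
    S_tilde = Q_tilde / (1.0 + np.outer(evals, evals))    # the m^2 scalar equations
    S = U @ S_tilde @ U.T
    # project onto the PSD cone by clipping negative eigenvalues
    w, V = np.linalg.eigh((S + S.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T
</pre>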

Landmark Selection

The selection of landmark points [math]\displaystyle{ Z }[/math] in the Nystrom method can greatly affect its performance. The authors use the k-means based sampling scheme of Zhang & Kwok<ref> Zhang, K. and Kwok, J. Clustered Nystrom method for large scale manifold learning and dimension reduction. IEEE Transactions on Neural Networks 21:1576-1587, 2010 </ref>, in which k-means clustering is first used to group the data and the centroid of each cluster is picked as a landmark for the Nystrom method.
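A brief sketch of this selection step; the use of scikit-learn's KMeans is an illustrative choice, not necessarily the implementation used in the paper.

<pre>
import numpy as np
from sklearn.cluster import KMeans

def select_landmarks(X, m, random_state=0):
    """Cluster the data into m groups and use the cluster centroids as landmark points Z."""
    km = KMeans(n_clusters=m, random_state=random_state).fit(X)
    return km.cluster_centers_   # m x d matrix of landmarks
</pre>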

Complexities

The space complexity of the proposed algorithm is [math]\displaystyle{ O(mn) }[/math], where n is the sample size and m the number of landmark points. Computationally, it requires repeated eigenvalue decompositions of [math]\displaystyle{ m \times m }[/math] matrices, and a single multiplication between the [math]\displaystyle{ n \times m }[/math] extrapolation matrix E and the [math]\displaystyle{ m \times m }[/math] dictionary kernel S. The overall complexity is [math]\displaystyle{ O(m^2n)+O(t\log(\mu_{max})m^3) }[/math], where t is the number of gradient mapping iterations and [math]\displaystyle{ \mu_{max} }[/math] is the maximum eigenvalue of the Hessian. The algorithm therefore has time and space complexity linear in the sample size.

Selecting Hyper-parameter

The hyper-parameter [math]\displaystyle{ \lambda }[/math] can be difficult to choose if the side information is limited. The authors propose a heuristic to choose it. The two residuals [math]\displaystyle{ S_0 -S }[/math] and [math]\displaystyle{ E_lSE_l^T - K_l^* }[/math] of the objective function are additive and require a tradeoff parameter [math]\displaystyle{ \lambda }[/math]. The normalized kernel alignment (NKA) <ref name="Cortes2010"> Cortes, C., Mohri, M. and Rostamizadeh, A. Two stage learning kernel algorithm. International Conference on Machine Learning, 2010. </ref> between two kernel matrices is

[math]\displaystyle{ \rho[K_1, K_2] = \frac { \langle K_{1c}, K_{2c}\rangle_F}{ \|K_{1c}\|_F \|K_{2c}\|_F } }[/math]

where [math]\displaystyle{ K_{1c} }[/math] is the double-centralized [math]\displaystyle{ K_{1} }[/math]. The NKA score always has magnitude at most 1, is independent of the scale of the solution, and is multiplicative by nature. Let [math]\displaystyle{ S(\lambda) }[/math] be the optimum of the objective function for a fixed [math]\displaystyle{ \lambda }[/math]; then choose the [math]\displaystyle{ \lambda }[/math] that maximizes the following criterion:

[math]\displaystyle{ \lambda^* = \underset{\lambda\in G} {arg\,max}\rho[S(\lambda),S_0]\times\rho[E_lS(\lambda)E^T_l, K^*_l ] }[/math]

Here G is the set of candidate [math]\displaystyle{ \lambda }[/math]'s. The above criterion is an information criterion measuring the quality of the solution, and it has several nice properties:

(1) It is scale invariant, so it does not need any additional tuning parameters.

(2) The first term measures the closeness between [math]\displaystyle{ S }[/math] and [math]\displaystyle{ S_0 }[/math], related to the unsupervised structure of the kernel matrix; the second term measures the closeness between [math]\displaystyle{ E_lSE^T_l }[/math] and [math]\displaystyle{ K_l^* }[/math], related to the side information. This criterion faithfully reflects what the objective function optimizes, though it is numerically different.

(3) If the value of the second term (the NKA with the ideal kernel) is high, there is a good chance that a good predictor exists<ref name="Cortes2010"/>.

(4) Obtaining the best [math]\displaystyle{ \lambda }[/math] does not require validation (or cross-validation), which is desirable for small-sample problems.
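A sketch of this selection procedure in NumPy; the solver argument stands in for an optimizer such as the projected-gradient sketch in the Optimization section, and the candidate grid is an assumed input.

<pre>
import numpy as np

def center(K):
    """Double-centralize a kernel matrix: H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def nka(K1, K2):
    """Normalized kernel alignment between two kernel matrices."""
    K1c, K2c = center(K1), center(K2)
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c, 'fro') * np.linalg.norm(K2c, 'fro'))

def select_lambda(grid, S0, E_l, K_star, solver):
    """Pick lambda maximizing rho[S(lam), S0] * rho[E_l S(lam) E_l^T, K*]."""
    best_lam, best_score = None, -np.inf
    for lam in grid:
        S = solver(S0, E_l, K_star, lam)   # e.g. the projected-gradient solver sketched earlier
        score = nka(S, S0) * nka(E_l @ S @ E_l.T, K_star)
        if score > best_score:
            best_lam, best_score = lam, score
    return best_lam
</pre>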

Experiments

The paper compares 7 algorithms for learning low-rank kernels: (1) Nystrom: the standard Nystrom method; (2) CSI: Cholesky with Side Information <ref name="Bach2005"> Bach, F.R. and Jordan, M.I. Kernel independent component analysis. International Conference on Machine Learning, 2005. </ref>; (3) Cluster: cluster kernel <ref name="Chapelle2003"> Chapelle, O., Weston, J., Scholkopf, B. Cluster kernels for semi-supervised learning. Advances in Neural Information Processing Systems 15, 2003. </ref>; (4) Spectral: non-parametric spectral graph kernel <ref name="Zhu2004"> Zhu, X., Kandola, J., Ghahramani, Z., and Lafferty, J. Nonparametric transforms of graph kernels for semi-supervised learning. Advances in Neural Information Processing Systems 16, 2004 </ref>; (5) TSK: two-stage kernel learning algorithm <ref name="Cortes2010"/>; (6) Breg: low-rank kernel learning with Bregman divergence <ref name="Kulis2009"> Kulis, B., Sustik, M.A., and Dhillon, I.S. Low-rank kernel learning with Bregman matrix divergences. Journal of Machine Learning Research, 10:341-376, 2009 </ref>; (7) the proposed method.

The benchmark datasets are taken from the SSL benchmark collection and the libsvm data sets. For each data set, the labeled samples are picked randomly. A Gaussian kernel [math]\displaystyle{ K(x_1, x_2) = exp(-\|x_1-x_2\|^2/b) }[/math] is used, with the kernel width b chosen as the average pairwise squared distance between samples. Most algorithms learn the [math]\displaystyle{ n \times n }[/math] low-rank kernel matrix over the labeled and unlabeled samples in the form [math]\displaystyle{ K = A^TA }[/math], which is fed into an SVM for classification. The resulting problem is a linear SVM using the columns of A as training/testing samples; note that A does not need to be formed explicitly, as long as the form of K is known.
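As one concrete way to realize [math]\displaystyle{ K = A^TA }[/math] from the learned decomposition [math]\displaystyle{ K \approx ESE^T }[/math]: since S is positive semi-definite, [math]\displaystyle{ A = S^{1/2}E^T }[/math] works. The sketch below assumes this construction and mentions scikit-learn's LinearSVC purely as an illustrative classifier.

<pre>
import numpy as np

def features_from_decomposition(E, S):
    """Return A (m x n) with A^T A = E S E^T, via a symmetric square root of the PSD matrix S."""
    vals, vecs = np.linalg.eigh(S)
    S_half = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return S_half @ E.T

# Usage sketch (classifier choice is illustrative):
#   A = features_from_decomposition(E, S)
#   from sklearn.svm import LinearSVC
#   clf = LinearSVC().fit(A[:, train_idx].T, y_train)
</pre>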

On most data sets, the algorithms that use labels in kernel learning outperform the baseline algorithm (method 1), indicating the value of side information. The proposed approach is competitive with state-of-the-art kernel learning algorithms while consuming less memory.

File:hssunTable1.png

Figure 1 examines the alignment score used to choose the hyper-parameter λ; the score correlates nicely with the classification accuracy. File:hssunFigure1.png

References

<references />