# stat946w18/Spectral normalization for generative adversarial networks


# Presented by

1. Liu, Wenqing

# Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been enjoying considerable success as a framework of generative models in recent years. The concept is to train the model distribution and the discriminator alternately, with the goal of reducing the difference between the model distribution and the target distribution, as measured by the best discriminator possible at each step of the training.

A persisting challenge in the training of GANs is the performance control of the discriminator. When the support of the model distribution and the support of the target distribution are disjoint, there exists a discriminator that can perfectly distinguish the model distribution from the target (Arjovsky & Bottou, 2017). Once such a discriminator is produced, the training of the generator comes to a complete stop, because the derivative of the so-produced discriminator with respect to the input turns out to be 0. This motivates us to introduce some form of restriction on the choice of discriminator.

In this paper, we propose a novel weight normalization method called *spectral normalization* that can stabilize the training of discriminator networks. Our normalization enjoys the following favorable properties: the Lipschitz constant is the only hyper-parameter to be tuned, and the implementation is simple with only a small additional computational cost. In this study, we also provide explanations of the effectiveness of spectral normalization relative to other regularization and normalization techniques.

# Model

Let us consider a simple discriminator made of a neural network of the following form, with input x:

[math] f(x,\theta) = W^{L+1}a_L(W^L(a_{L-1}(W^{L-1}(\cdots a_1(W^1x)\cdots)))) [/math]

where [math] \theta:=\{W^1,\cdots,W^L, W^{L+1}\} [/math] is the set of learning parameters, [math]W^l\in R^{d_l\times d_{l-1}}, W^{L+1}\in R^{1\times d_L} [/math], and [math]a_l [/math] is an element-wise non-linear activation function. The final output of the discriminator is given by [math]D(x,\theta) = A(f(x,\theta)) [/math].

The standard formulation of GANs is given by [math]\min_{G}\max_{D}V(G,D)[/math], where the min and max are taken over the set of generator and discriminator functions, respectively. The conventional form of [math]V(G,D) [/math] is given by [math]E_{x\sim q_{data}}[\log D(x)] + E_{x'\sim p_G}[\log(1-D(x'))][/math], where [math]q_{data}[/math] is the data distribution and [math]p_G[/math] is the model (generator) distribution to be learned through the adversarial min-max optimization. It is known that, for a fixed generator G, the optimal discriminator for this form of [math]V(G,D) [/math] is given by [math] D_G^{*}(x):=q_{data}(x)/(q_{data}(x)+p_G(x))[/math].

We search for the discriminator D in the set of K-Lipschitz continuous functions, that is, [math] \arg\max_{||f||_{Lip}\le K}V(G,D)[/math], where [math] ||f||_{Lip}[/math] denotes the smallest value M such that [math] ||f(x)-f(x')||/||x-x'||\le M [/math] for any x, x', with the norm being the [math] l_2 [/math] norm.

Our spectral normalization controls the Lipschitz constant of the discriminator function [math] f [/math] by constraining the spectral norm of each layer [math] g: h_{in}\rightarrow h_{out}[/math]. By definition, the Lipschitz norm [math] ||g||_{Lip} [/math] is equal to [math] \sup_h\sigma(\nabla g(h)) [/math], where [math] \sigma(A) [/math] is the spectral norm of the matrix A, which is equivalent to the largest singular value of A. Therefore, for a linear layer [math] g(h)=Wh [/math], the norm is given by [math] ||g||_{Lip}=\sigma(W) [/math]. We then observe the following bound:

[math] ||f||_{Lip}\le ||(h_L\rightarrow W^{L+1}h_{L})||_{Lip}*||a_{L}||_{Lip}*||(h_{L-1}\rightarrow W^{L}h_{L-1})||_{Lip}\cdots ||a_1||_{Lip}*||(h_0\rightarrow W^1h_0)||_{Lip}=\prod_{l=1}^{L+1}\sigma(W^l) *\prod_{l=1}^{L} ||a_l||_{Lip} [/math]

Our spectral normalization normalizes the spectral norm of the weight matrix W so that it satisfies the Lipschitz constraint [math] \sigma(W)=1 [/math]:

[math] \bar{W_{SN}}:= W/\sigma(W) [/math]
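In practice, [math] \sigma(W) [/math] is estimated with a few steps of power iteration rather than a full SVD (this is the approach taken in the paper). A minimal NumPy sketch, where the layer shape and iteration count are illustrative assumptions:

```python
import numpy as np

def spectral_norm(W, n_iter=100):
    """Estimate sigma(W), the largest singular value, by power iteration."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))        # illustrative layer shape
W_sn = W / spectral_norm(W)              # reparameterized weight, sigma ~ 1
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

Dividing by the estimated spectral norm leaves the largest singular value of the reparameterized matrix at (approximately) one, which is exactly the Lipschitz constraint above.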

In summary, just as weight normalization does, we reparameterize the weight matrix as [math] \bar{W_{SN}} := W/\sigma(W) [/math] to fix its largest singular value. The gradient with respect to the underlying parameter W can then be calculated by the chain rule:

[math] \frac{\partial V(G,D)}{\partial W} = \frac{\partial V(G,D)}{\partial \bar{W_{SN}}}*\frac{\partial \bar{W_{SN}}}{\partial W} [/math]

[math] \frac{\partial \bar{W_{SN}}}{\partial W_{ij}} = \frac{1}{\sigma(W)}E_{ij}-\frac{1}{\sigma(W)^2}*\frac{\partial \sigma(W)}{\partial(W_{ij})}W=\frac{1}{\sigma(W)}E_{ij}-\frac{[u_1v_1^T]_{ij}}{\sigma(W)^2}W=\frac{1}{\sigma(W)}(E_{ij}-[u_1v_1^T]_{ij}\bar{W_{SN}})[/math]

where [math] E_{ij} [/math] is the matrix whose (i,j)-th entry is 1 and zero everywhere else, and [math] u_1, v_1[/math] are respectively the first left and right singular vectors of W.
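As a sanity check (not part of the paper), the analytic derivative above can be compared entry-wise against a finite difference; here [math] \sigma, u_1, v_1 [/math] are taken from a full SVD, and the matrix shape and indices i, j are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 4))
U, S, Vt = np.linalg.svd(W)
sigma, u1, v1 = S[0], U[:, 0], Vt[0, :]
W_sn = W / sigma

i, j = 2, 1
E = np.zeros_like(W)
E[i, j] = 1.0

# analytic derivative: (1/sigma) * (E_ij - [u1 v1^T]_ij * W_sn)
analytic = (E - u1[i] * v1[j] * W_sn) / sigma

# finite-difference approximation of d(W / sigma(W)) / dW_ij
h = 1e-6
Wp = W + h * E
fd = (Wp / np.linalg.svd(Wp, compute_uv=False)[0] - W_sn) / h

print(np.max(np.abs(analytic - fd)))
```

The two matrices agree up to the finite-difference truncation error, confirming that [math] \partial\sigma(W)/\partial W_{ij} = [u_1v_1^T]_{ij} [/math].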

# Spectral Normalization vs. Other Regularization Techniques

The weight normalization introduced by Salimans & Kingma (2016) is a method that normalizes the [math] l_2 [/math] norm of each row vector in the weight matrix. Mathematically, this is equivalent to requiring the weight-normalized matrix [math] \bar{W_{WN}} [/math] to satisfy:

[math] \sigma_1(\bar{W_{WN}})^2+\cdots+\sigma_T(\bar{W_{WN}})^2=d_0, \text{ where } T=\min(d_i,d_0) [/math]

where [math] \sigma_t(A) [/math] is the t-th singular value of matrix A.

Note that if [math] \bar{W_{WN}} [/math] is the weight-normalized matrix of dimension [math] d_i\times d_0 [/math], the norm [math] ||\bar{W_{WN}}h||_2 [/math] for a fixed unit vector [math] h [/math] is maximized when [math] \sigma_1(\bar{W_{WN}})=\sqrt{d_0} [/math] and [math] \sigma_t(\bar{W_{WN}})=0 [/math] for [math] t=2,\cdots,T [/math], which means that [math] \bar{W_{WN}} [/math] is of rank one. In order to retain as much of the norm of the input as possible, and hence to make the discriminator more sensitive, one would hope to make the norm of [math] \bar{W_{WN}}h [/math] large. For weight normalization, however, this comes at the cost of reducing the rank, and hence the number of features usable by the discriminator. Thus, there is a conflict of interest between weight normalization and our desire to use as many features as possible to distinguish the generator distribution from the target distribution. The former interest often reigns over the other, inadvertently diminishing the number of features used by the discriminator. Consequently, the algorithm would produce a rather arbitrary model distribution that matches the target distribution only at a select few features.
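The singular-value identity above can be verified numerically: row-normalizing a random matrix forces the squared singular values to sum to the number of rows, because that sum equals the squared Frobenius norm. A NumPy check with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 8, 16                      # illustrative layer dimensions
W = rng.standard_normal((d_out, d_in))

# weight normalization: scale every row vector to unit l2 norm
W_wn = W / np.linalg.norm(W, axis=1, keepdims=True)

s = np.linalg.svd(W_wn, compute_uv=False)
# sum of squared singular values = squared Frobenius norm = number of rows
print(np.sum(s**2))
```

The fixed total "budget" of squared singular values is what creates the rank-versus-sensitivity trade-off discussed above.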

Brock et al. (2016) introduced orthonormal regularization on each weight to stabilize the training of GANs. In their work, they augmented the adversarial objective function by adding the following term:

[math] ||W^TW-I||^2_F [/math]

While this seems to serve the same purpose as spectral normalization, orthonormal regularization is mathematically quite different from our spectral normalization: orthonormal regularization destroys the information about the spectrum by setting all the singular values to one, whereas spectral normalization only scales the spectrum so that its maximum is one.
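The difference can be made concrete with a small NumPy sketch (illustrative, not from either paper): setting all singular values to one makes the orthonormal penalty vanish but discards the spectrum, while dividing by [math] \sigma(W) [/math] merely rescales it:

```python
import numpy as np

def orthonormal_penalty(W):
    """Brock et al. (2016) regularizer: ||W^T W - I||_F^2."""
    return np.sum((W.T @ W - np.eye(W.shape[1]))**2)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

Q = U @ Vt            # all singular values set to 1: penalty vanishes,
                      # but the original spectrum of W is lost
W_sn = W / s[0]       # spectral normalization: spectrum only rescaled

print(orthonormal_penalty(Q))                  # ~0
print(np.linalg.svd(W_sn, compute_uv=False))   # = s / s[0], same shape as s
```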

Gulrajani et al. (2017) used the gradient penalty method in combination with WGAN. In their work, they placed a K-Lipschitz constraint on the discriminator by augmenting the objective function with a regularizer that rewards the function for having a local 1-Lipschitz constant (i.e., [math] ||\nabla_{\hat{x}} f ||_2 = 1 [/math]) at discrete sets of points of the form [math] \hat{x}:=\epsilon \tilde{x} + (1-\epsilon)x [/math], generated by interpolating a sample [math] \tilde{x} [/math] from the generative distribution and a sample [math] x [/math] from the data distribution. This approach has the obvious weakness of being heavily dependent on the support of the current generative distribution. Moreover, WGAN-GP requires more computational cost than our spectral normalization with single-step power iteration, because the computation of [math] ||\nabla_{\hat{x}} f ||_2 [/math] requires one whole round of forward and backward propagation.
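The interpolation and penalty can be sketched for a toy linear critic f(x) = w·x, whose gradient is w everywhere, so no automatic differentiation is needed; the batch size and the coefficient `lam` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(8)              # toy linear critic: f(x) = w @ x
x_real = rng.standard_normal((32, 8))   # samples from the data distribution
x_fake = rng.standard_normal((32, 8))   # samples from the generator

# interpolate: x_hat = eps * x_fake + (1 - eps) * x_real
eps = rng.uniform(size=(32, 1))
x_hat = eps * x_fake + (1 - eps) * x_real

# for a linear critic, grad_x f(x_hat) = w at every point
grad = np.tile(w, (32, 1))
lam = 10.0                              # penalty coefficient (assumption)
penalty = lam * np.mean((np.linalg.norm(grad, axis=1) - 1.0)**2)
print(penalty)
```

In a real WGAN-GP, `grad` must be obtained by backpropagating through the critic at each [math] \hat{x} [/math], which is exactly the extra forward-backward round mentioned above.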

# Experimental settings and results

## Objective function

For all methods other than WGAN-GP, we use [math] V(G,D) := E_{x\sim q_{data}(x)}[\log D(x)] + E_{z\sim p(z)}[\log (1-D(G(z)))][/math] to update D; for the updates of G, we use [math] -E_{z\sim p(z)}[\log(D(G(z)))] [/math]. Alternatively, we also test the performance of the algorithm with the so-called hinge loss, which is given by [math] V_D(\hat{G},D)= E_{x\sim q_{data}(x)}[\min(0,-1+D(x))] + E_{z\sim p(z)}[\min(0,-1-D(\hat{G}(z)))] [/math]

[math] V_G(G,\hat{D})=-E_{z\sim p(z)}[\hat{D}(G(z))] [/math]
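The hinge objectives above translate directly into code; the discriminator outputs below are illustrative values:

```python
import numpy as np

def hinge_d_loss(d_real, d_fake):
    """V_D: E[min(0, -1 + D(x))] + E[min(0, -1 - D(G(z)))]."""
    return np.mean(np.minimum(0.0, -1.0 + d_real)) \
         + np.mean(np.minimum(0.0, -1.0 - d_fake))

def hinge_g_loss(d_fake):
    """V_G: -E[D(G(z))], maximized when the critic scores fakes highly."""
    return -np.mean(d_fake)

d_real = np.array([1.5, 0.2, -0.3])   # illustrative D outputs on real data
d_fake = np.array([-1.2, 0.4, -0.8])  # illustrative D outputs on fakes
print(hinge_d_loss(d_real, d_fake))
print(hinge_g_loss(d_fake))
```

Note that real samples with D(x) > 1 and fake samples with D(x) < -1 contribute nothing to V_D, which is the "hinge" behavior.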


# Source

Miyato, T., Kataoka, T., Koyama, M. & Yoshida, Y. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (2018). <references />