stat946w18/Spectral Normalization for Generative Adversarial Networks
Presented by
1. Liu, Wenqing
Introduction
Generative adversarial networks (GANs) (Goodfellow et al., 2014) have enjoyed considerable success as a framework for generative models in recent years. The idea is to train the model distribution and the discriminator in turn, with the goal of reducing the difference between the model distribution and the target distribution as measured by the best discriminator available at each step of the training.
A persisting challenge in the training of GANs is the performance control of the discriminator. When the support of the model distribution and the support of the target distribution are disjoint, there exists a discriminator that can perfectly distinguish the model distribution from the target (Arjovsky & Bottou, 2017). Once such a discriminator is produced, the training of the generator comes to a complete stop, because the derivative of the so-produced discriminator with respect to the input turns out to be 0. This motivates the introduction of some form of restriction on the choice of discriminator.
In this paper, we propose a novel weight normalization method called spectral normalization that can stabilize the training of discriminator networks. Our normalization enjoys several favorable properties. In this study, we also provide explanations of the effectiveness of spectral normalization compared with other regularization or normalization techniques.
Model
Let us consider a simple discriminator made of a neural network of the following form, with input x: [math]\displaystyle{ f(x,\theta) = W^{L+1}a_L(W^L(a_{L-1}(W^{L-1}(\cdots a_1(W^1x)\cdots)))) }[/math] where [math]\displaystyle{ \theta:=\{W^1,\cdots,W^L, W^{L+1}\} }[/math] is the set of learning parameters, [math]\displaystyle{ W^l\in R^{d_l*d_{l-1}}, W^{L+1}\in R^{1*d_L} }[/math], and [math]\displaystyle{ a_l }[/math] is an element-wise non-linear activation function. The final output of the discriminator is given by [math]\displaystyle{ D(x,\theta) = A(f(x,\theta)) }[/math].

The standard formulation of GANs is given by [math]\displaystyle{ \min_{G}\max_{D}V(G,D) }[/math], where the min and max over G and D are taken over the sets of generator and discriminator functions, respectively. The conventional form of [math]\displaystyle{ V(G,D) }[/math] is given by [math]\displaystyle{ E_{x\sim q_{data}}[\log D(x)] + E_{x'\sim p_G}[\log(1-D(x'))] }[/math], where [math]\displaystyle{ q_{data} }[/math] is the data distribution and [math]\displaystyle{ p_G(x) }[/math] is the model (generator) distribution to be learned through the adversarial min-max optimization. It is known that, for a fixed generator G, the optimal discriminator for this form of [math]\displaystyle{ V(G,D) }[/math] is given by [math]\displaystyle{ D_G^{*}(x):=q_{data}(x)/(q_{data}(x)+p_G(x)) }[/math].

We search for the discriminator D within the set of K-Lipschitz continuous functions, that is, [math]\displaystyle{ \arg\max_{||f||_{Lip}\le K}V(G,D) }[/math], where [math]\displaystyle{ ||f||_{Lip} }[/math] denotes the smallest value M such that [math]\displaystyle{ ||f(x)-f(x')||/||x-x'||\le M }[/math] for any x, x', with the norm being the [math]\displaystyle{ l_2 }[/math] norm.

Our spectral normalization controls the Lipschitz constant of the discriminator function [math]\displaystyle{ f }[/math] by constraining the spectral norm of each layer [math]\displaystyle{ g: h_{in}\rightarrow h_{out} }[/math]. By definition, the Lipschitz norm [math]\displaystyle{ ||g||_{Lip} }[/math] is equal to [math]\displaystyle{ \sup_h\sigma(\nabla g(h)) }[/math], where [math]\displaystyle{ \sigma(A) }[/math] is the spectral norm of the matrix A, i.e. its largest singular value. Therefore, for a linear layer [math]\displaystyle{ g(h)=Wh }[/math], the norm is given by [math]\displaystyle{ ||g||_{Lip}=\sigma(W) }[/math]. We observe the following bound:
[math]\displaystyle{ ||f||_{Lip}\le ||(h_L\rightarrow W^{L+1}h_{L})||_{Lip}*||a_{L}||_{Lip}*||(h_{L-1}\rightarrow W^{L}h_{L-1})||_{Lip}\cdots ||a_1||_{Lip}*||(h_0\rightarrow W^1h_0)||_{Lip}=\prod_{l=1}^{L+1}\sigma(W^l) *\prod_{l=1}^{L} ||a_l||_{Lip} }[/math]
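To make this bound concrete, here is a small NumPy sketch (an illustration added in this summary, not code from the paper) that compares an empirical Lipschitz estimate of a random ReLU network with the product of its per-layer spectral norms; the layer sizes, the choice of ReLU activations, and the random sampling are assumptions made only for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Three layers: 8 -> 16 -> 16 -> 1, with 1-Lipschitz ReLU activations.
Ws = [rng.normal(size=(16, 8)), rng.normal(size=(16, 16)), rng.normal(size=(1, 16))]

def f(x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)          # a_l = ReLU, which is 1-Lipschitz
    return Ws[-1] @ h                       # final linear layer W^{L+1}

# Upper bound from the inequality: product of per-layer spectral norms.
bound = np.prod([np.linalg.norm(W, 2) for W in Ws])

# Empirical Lipschitz estimate over random pairs of inputs.
est = max(
    np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y)
    for x, y in (rng.normal(size=(2, 8)) for _ in range(1000))
)
print(f"empirical estimate {est:.2f} <= bound {bound:.2f}")
</syntaxhighlight>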
Our spectral normalization normalizes the spectral norm of the weight matrix W so that it satisfies the Lipschitz constraint [math]\displaystyle{ \sigma(W)=1 }[/math]:
[math]\displaystyle{ \bar{W_{SN}}:= W/\sigma(W) }[/math]
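In practice, [math]\displaystyle{ \sigma(W) }[/math] can be approximated with a few steps of power iteration instead of a full singular value decomposition. The following is a minimal NumPy sketch of that idea; the function name, the number of iterations, and the reuse of the vector u between calls are illustrative assumptions rather than the authors' reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def spectral_normalize(W, u=None, n_iter=1):
    """Return (W / sigma(W), u), with sigma(W) estimated by power iteration."""
    if u is None:
        u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                  # approximates the largest singular value
    return W / sigma, u                # u can be reused at the next call

W = np.random.default_rng(1).normal(size=(32, 64))
W_sn, _ = spectral_normalize(W, n_iter=5)
print(np.linalg.norm(W_sn, 2))         # close to 1
</syntaxhighlight>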
In summary, just like weight normalization, we reparameterize the weight matrix as [math]\displaystyle{ \bar{W_{SN}} = W/\sigma(W) }[/math] to fix its largest singular value to 1. We can then calculate the gradient with respect to the underlying parameter W by the chain rule:
[math]\displaystyle{ \frac{\partial V(G,D)}{\partial W} = \frac{\partial V(G,D)}{\partial \bar{W_{SN}}}*\frac{\partial \bar{W_{SN}}}{\partial W} }[/math]
[math]\displaystyle{ \frac{\partial \bar{W_{SN}}}{\partial W_{ij}} = \frac{1}{\sigma(W)}E_{ij}-\frac{1}{\sigma(W)^2}*\frac{\partial \sigma(W)}{\partial(W_{ij})}W=\frac{1}{\sigma(W)}E_{ij}-\frac{[u_1v_1^T]_{ij}}{\sigma(W)^2}W=\frac{1}{\sigma(W)}(E_{ij}-[u_1v_1^T]_{ij}\bar{W_{SN}}) }[/math]
where [math]\displaystyle{ E_{ij} }[/math] is the matrix whose (i,j)-th entry is 1 and zero everywhere else, and [math]\displaystyle{ u_1, v_1 }[/math] are respectively the first left and right singular vectors of W.
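The gradient formula above can be sanity-checked numerically. The short script below (an illustration, not from the paper) compares the analytic derivative of [math]\displaystyle{ \bar{W_{SN}} }[/math] with respect to a single entry [math]\displaystyle{ W_{ij} }[/math] against a finite-difference approximation; the matrix size, the chosen entry, and the step size are arbitrary assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
U, S, Vt = np.linalg.svd(W)
sigma, u1, v1 = S[0], U[:, 0], Vt[0, :]
W_sn = W / sigma

i, j, eps = 1, 2, 1e-6
E = np.zeros_like(W)
E[i, j] = 1.0                      # the matrix E_ij from the text

# Analytic derivative of W_SN with respect to W_ij (formula above).
analytic = (E - np.outer(u1, v1)[i, j] * W_sn) / sigma

# Finite-difference approximation of the same derivative.
W_pert = W + eps * E
numeric = (W_pert / np.linalg.norm(W_pert, 2) - W_sn) / eps

print(np.max(np.abs(analytic - numeric)))   # should be tiny (~1e-6 or less)
</syntaxhighlight>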
Spectral Normalization vs. Other Regularization Techniques
The weight normalization introduced by Salimans & Kingma (2016) is a method that normalizes the [math]\displaystyle{ l_2 }[/math] norm of each row vector in the weight matrix. Mathematically, this is equivalent to the following constraint on the weight-normalized matrix [math]\displaystyle{ \bar{W_{WN}} }[/math]:
[math]\displaystyle{ \sigma_1(\bar{W_{WN}})^2+\cdots+\sigma_T(\bar{W_{WN}})^2=d_0, \text{ where } T=\min(d_i,d_0), }[/math] and [math]\displaystyle{ \sigma_t(A) }[/math] denotes the t-th singular value of matrix A.
Note that if [math]\displaystyle{ \bar{W_{WN}} }[/math] is the weight-normalized matrix of dimension [math]\displaystyle{ d_i*d_0 }[/math], the norm [math]\displaystyle{ ||\bar{W_{WN}}h||_2 }[/math] for a fixed unit vector [math]\displaystyle{ h }[/math] is maximized at [math]\displaystyle{ \sqrt{d_0} }[/math], which is attained when [math]\displaystyle{ \sigma_1(\bar{W_{WN}})=\sqrt{d_0} }[/math] and [math]\displaystyle{ \sigma_t(\bar{W_{WN}})=0 }[/math] for [math]\displaystyle{ t=2,\dots,T }[/math], that is, when [math]\displaystyle{ \bar{W_{WN}} }[/math] has rank one.
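The constraint can be verified numerically. The short NumPy sketch below (an illustration by this summary, not the paper's code) builds a matrix whose [math]\displaystyle{ d_0 }[/math] rows have unit [math]\displaystyle{ l_2 }[/math] norm, checks that its squared singular values sum to [math]\displaystyle{ d_0 }[/math] (the squared Frobenius norm), and prints [math]\displaystyle{ \sigma_1 }[/math], the largest value that [math]\displaystyle{ ||\bar{W_{WN}}h||_2 }[/math] can take over unit vectors [math]\displaystyle{ h }[/math]; the dimensions are arbitrary assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d_0, d_i = 8, 20
W = rng.normal(size=(d_0, d_i))
# Weight normalization: give every row unit l2 norm.
W_wn = W / np.linalg.norm(W, axis=1, keepdims=True)

svals = np.linalg.svd(W_wn, compute_uv=False)
print(np.sum(svals ** 2))      # = d_0 (squared Frobenius norm of W_wn)
print(svals[0])                # = max over unit h of ||W_wn h||_2
</syntaxhighlight>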
More Formulations of Recurrent Neural Networks
The standard RNN is formalized as follows:
- [math]\displaystyle{ \,h_t=\tanh(W_{hx}x_t+W_{hh}h_{t-1}+b_h) }[/math]
- [math]\displaystyle{ \,o_t=W_{oh}h_t+b_o }[/math]
Given a sequence of input vectors [math]\displaystyle{ \,(x_1,\cdots,x_{T}) }[/math], the RNN computes a sequence of hidden states [math]\displaystyle{ \,(h_1,\cdots,h_{T}) }[/math] and a sequence of outputs [math]\displaystyle{ \,(o_1,\cdots,o_{T}) }[/math] by iterating the above equations. [math]\displaystyle{ \,W_{hx} }[/math] is the input-to-hidden weight matrix, [math]\displaystyle{ \,W_{hh} }[/math] is the hidden-to-hidden weight matrix, and [math]\displaystyle{ \,W_{oh} }[/math] is the hidden-to-output weight matrix. Vectors [math]\displaystyle{ \,b_{h} }[/math] and [math]\displaystyle{ \,b_{o} }[/math] are the biases. When t=1, the undefined [math]\displaystyle{ \,W_{hh}h_{t-1} }[/math] is replaced with a special initial bias vector, [math]\displaystyle{ \,h_{init} }[/math].
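For concreteness, here is a minimal NumPy sketch of the forward pass defined by the two equations above; the dimensions, the scale of the random weights, and the zero initial state standing in for [math]\displaystyle{ h_{init} }[/math] are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h, d_o, T = 5, 8, 3, 10
W_hx = rng.normal(scale=0.1, size=(d_h, d_x))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))   # hidden-to-hidden weights
W_oh = rng.normal(scale=0.1, size=(d_o, d_h))   # hidden-to-output weights
b_h, b_o = np.zeros(d_h), np.zeros(d_o)

xs = [rng.normal(size=d_x) for _ in range(T)]   # input sequence x_1 .. x_T
h = np.zeros(d_h)                               # stands in for h_init
outputs = []
for x_t in xs:
    h = np.tanh(W_hx @ x_t + W_hh @ h + b_h)    # hidden state update
    outputs.append(W_oh @ h + b_o)              # output o_t
print(len(outputs), outputs[0].shape)
</syntaxhighlight>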
It may seem straightforward to train RNNs with gradient descent, but in reality the gradient decays exponentially as it is backpropagated through time. The relationship between the parameters and the dynamics of the RNN is highly unstable, which makes gradient descent ineffective. Thus, it is argued that RNNs can hardly learn long-range temporal dependencies when gradient descent is used for training. A well-known way to deal with the inability of gradient descent to learn long-range temporal structure in RNNs is the Long Short-Term Memory (LSTM) architecture. (http://www.cs.utoronto.ca/~ilya/pubs/2011/LANG-RNN.pdf)
There are different variants of LSTM<ref name=grave> </ref><ref> Gers, Felix, and Jürgen Schmidhuber. "Recurrent nets that time and count." Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on. Vol. 3. IEEE, 2000. </ref><ref> Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014). </ref> other than the original one proposed by Hochreiter et al.<ref name=lstm> </ref> Greff et al. compare the performance of several popular variants in their work<ref> Greff, Klaus, et al. "LSTM: A Search Space Odyssey." arXiv preprint arXiv:1503.04069 (2015). </ref> and conclude that they perform about the same, while Jozefowicz et al. suggest that some architectures can perform better than the LSTM on certain tasks<ref> Jozefowicz, Rafal, Wojciech Zaremba, and Ilya Sutskever. "An Empirical Exploration of Recurrent Network Architectures." Proceedings of the 32nd International Conference on Machine Learning (ICML-15). 2015. </ref>.
Criticisms
There is some concern regarding whether this model can provide a truly scalable solution to machine translation. In particular, it is not obvious from the reported results that the model scales well to long sentences. The model is also severely limited by the fact that it only works well in the absence of infrequent words. These theoretical limitations, together with the sparse experimental results, give rise to skepticism about the overarching validity of the model.
Source
Sutskever, I., Vinyals, O., & Le, Q. V. Sequence to sequence learning with neural networks. In Proc. Advances in Neural Information Processing Systems 27, 3104–3112 (2014). <references />