stat946w18/IMPROVING GANS USING OPTIMAL TRANSPORT

Introduction

Generative Adversarial Networks (GANs) are powerful generative models. A GAN consists of a generator and a discriminator (or critic). The generator is a neural network trained to produce data whose distribution matches that of the real data, while the critic is a neural network trained to separate the generated data from the real data. A loss function that measures the distance between the generated and real data distributions is essential for training the generator.

Optimal transport theory measures the distance between distributions with respect to a metric, which provides another way to train the generator. Its main advantage over the distance measures used in standard GANs is that the primal formulation has a closed-form, tractable solution; the drawback is that estimating it from mini-batches yields biased gradients, which makes the resulting estimator statistically inconsistent.

This paper presents a GAN variant named OT-GAN, which incorporates a discriminative metric called the 'Mini-batch Energy Distance' into its critic in order to overcome the issue of biased gradients.

GANs and Optimal Transport

Generative Adversarial Nets

The original GAN is reviewed first, together with its objective function.
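The objective is the standard minimax game of Goodfellow et al. (2014), written here in the usual notation:

[math]\displaystyle{ \min_{g} \max_{d} \; \mathbb{E}_{x \sim p(x)} \left[ \log d(x) \right] + \mathbb{E}_{y \sim g(y)} \left[ \log (1 - d(y)) \right] }[/math]

where [math]\displaystyle{ d(\cdot) }[/math] outputs the probability that its input is a real sample rather than a generated one.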

The goal of GAN training is to find a pair [math]\displaystyle{ (g,d) }[/math] that achieves a Nash equilibrium. However, because the generator and the discriminator are both trained with gradient-descent techniques, training may fail to converge.

Wasserstein Distance (Earth-Mover Distance)

In order to address this convergence failure, Arjovsky et al. (2017) suggested the Wasserstein distance (Earth-Mover distance), which is based on optimal transport theory.
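In its primal form, this distance can be written (up to notation) as

[math]\displaystyle{ D_{EMD}(p, g) = \inf_{\gamma \in \prod (p,g)} \mathbb{E}_{(x,y) \sim \gamma} \left[ c(x,y) \right] }[/math]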

where [math]\displaystyle{ \prod (p,g) }[/math] is the set of all joint distributions [math]\displaystyle{ \gamma (x,y) }[/math] with marginals [math]\displaystyle{ p(x) }[/math] (real data) and [math]\displaystyle{ g(y) }[/math] (generated data), and [math]\displaystyle{ c(x,y) }[/math] is a cost function; Arjovsky et al. used the Euclidean distance in their paper.

The Wasserstein distance can be interpreted as the minimum amount of mass that must be moved to transform the generator distribution [math]\displaystyle{ g(y) }[/math] into the real data distribution [math]\displaystyle{ p(x) }[/math].

Since computing the Wasserstein distance exactly is usually not possible, the proposed Wasserstein GAN (W-GAN) provides an approximate solution by switching the optimal transport problem to its dual formulation over a set of 1-Lipschitz functions, which can then be approximated by a neural network.

W-GAN alleviates the unstable training of the original GAN and solves the optimal transport problem approximately, but the dual formulation itself remains intractable to solve exactly (in particular, the 1-Lipschitz constraint can only be enforced approximately).


The training process becomes tractable, and a fully connected multi-layer network is sufficient for the critic. However, the calculation of the Wasserstein distance remains an approximation, and the critic cannot be fully optimized in practice.
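As a rough illustration of this dual-form estimate, the following PyTorch-style sketch trains a critic with weight clipping to approximate the 1-Lipschitz constraint; the network, optimizer, and clipping value are illustrative choices, not taken from the paper.

<pre>
import torch
import torch.nn as nn

# Illustrative critic network f_w (any architecture could be used here).
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(x_real, x_fake, clip=0.01):
    # Dual-form estimate: max over (roughly) 1-Lipschitz f of E_p[f(x)] - E_g[f(y)].
    loss = -(critic(x_real).mean() - critic(x_fake).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Weight clipping is a crude way to keep f approximately 1-Lipschitz.
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)
    return -loss.item()  # current estimate of the Wasserstein distance
</pre>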

Sinkhorn Distance

Genevay et al. (2017) proposed using the primal formulation of optimal transport, instead of the dual formulation, for generative modeling. They introduced the Sinkhorn distance, a smoothed generalization of the Wasserstein distance.
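Up to notation, the Sinkhorn distance restricts the set of admissible couplings by their entropy:

[math]\displaystyle{ \mathcal{W}_{c}(p, g) = \inf_{\gamma \in \prod_{\beta} (p,g)} \mathbb{E}_{(x,y) \sim \gamma} \left[ c(x,y) \right] }[/math]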

It introduces an entropy restriction ([math]\displaystyle{ \beta }[/math]) on the set of joint distributions [math]\displaystyle{ \prod_{\beta} (p,g) }[/math]. This distance can be approximated from mini-batches of data [math]\displaystyle{ X, Y }[/math], each consisting of [math]\displaystyle{ K }[/math] vectors [math]\displaystyle{ x, y }[/math]. The [math]\displaystyle{ (i, j) }[/math]-th entry of the cost matrix [math]\displaystyle{ C }[/math] can be interpreted as the cost of transporting [math]\displaystyle{ x_i }[/math] in mini-batch [math]\displaystyle{ X }[/math] to [math]\displaystyle{ y_j }[/math] in mini-batch [math]\displaystyle{ Y }[/math]. The resulting distance is:
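[math]\displaystyle{ \mathcal{W}_{c}(X, Y) = \inf_{M \in \mathcal{M}} \operatorname{Tr}\left[ M C^{T} \right] }[/math]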

where [math]\displaystyle{ M }[/math] is a [math]\displaystyle{ K \times K }[/math] soft matching matrix with positive entries, representing the joint distribution [math]\displaystyle{ \gamma (x,y) }[/math] over the two mini-batches; every row and every column of [math]\displaystyle{ M }[/math] sums to 1.
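The soft matching [math]\displaystyle{ M }[/math] can be computed with the standard Sinkhorn iterations. Below is a minimal NumPy sketch, assuming a precomputed [math]\displaystyle{ K \times K }[/math] cost matrix C; the smoothing strength and iteration count are illustrative, not the paper's settings.

<pre>
import numpy as np

def sinkhorn(C, beta=10.0, n_iters=100):
    """Entropy-smoothed transport between two mini-batches of size K.

    C : (K, K) cost matrix with C[i, j] = c(x_i, y_j).
    Returns the soft matching M (rows and columns each summing to 1,
    as described above) and the transport cost Tr[M C^T] = sum(M * C).
    """
    K = C.shape[0]
    G = np.exp(-beta * C)           # Gibbs kernel; larger beta -> less smoothing
    u = np.ones(K)
    v = np.ones(K)
    for _ in range(n_iters):        # alternately rescale rows and columns to sum to 1
        u = 1.0 / (G @ v)
        v = 1.0 / (G.T @ u)
    M = u[:, None] * G * v[None, :]
    return M, float(np.sum(M * C))
</pre>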

This mini-batch Sinkhorn distance is not only fully tractable but also addresses the instability problem of GANs. However, the expectation of [math]\displaystyle{ \mathcal{W}_{c} }[/math] over mini-batches is not a valid metric over probability distributions, and the resulting gradients are biased for any fixed mini-batch size.

Energy Distance (Cramer Distance)

In order to solve the above problem, Bellemare et al. (2017) proposed using the Energy Distance:
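[math]\displaystyle{ D_{ED}(p, g) = \sqrt{ 2\, \mathbb{E} \left[ \lVert x - y \rVert \right] - \mathbb{E} \left[ \lVert x - x' \rVert \right] - \mathbb{E} \left[ \lVert y - y' \rVert \right] } }[/math]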

where [math]\displaystyle{ x, x' }[/math] are independent samples from the data distribution [math]\displaystyle{ p }[/math] and [math]\displaystyle{ y, y' }[/math] are independent samples from the generator distribution [math]\displaystyle{ g }[/math]. Cramer GAN trains the generator by minimizing this energy distance.
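A minimal NumPy sketch of a sample-based estimate of this distance (the batch pairing and names are illustrative):

<pre>
import numpy as np

def energy_distance(x, x2, y, y2):
    """Estimate D_ED(p, g) from batches x, x2 ~ p and y, y2 ~ g (rows are samples)."""
    d = lambda a, b: np.linalg.norm(a - b, axis=1).mean()  # average Euclidean distance
    val = 2 * d(x, y) - d(x, x2) - d(y, y2)
    return np.sqrt(max(val, 0.0))  # clip at 0: the finite-sample estimate can dip below 0
</pre>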

MINI-BATCH ENERGY DISTANCE

Following Salimans et al. (2016), who found mini-batch GANs to be more powerful when working with distributions over mini-batches [math]\displaystyle{ g(X), p(X) }[/math] rather than over individual images, the distance measure here is defined over mini-batches.

GENERALIZED ENERGY DISTANCE

The generalized energy distance allows the use of non-Euclidean distance functions [math]\displaystyle{ d }[/math], and it remains valid when applied to mini-batches rather than individual samples:
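[math]\displaystyle{ D_{GED}(p, g) = \sqrt{ 2\, \mathbb{E} \left[ d(X, Y) \right] - \mathbb{E} \left[ d(X, X') \right] - \mathbb{E} \left[ d(Y, Y') \right] } }[/math]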

where [math]\displaystyle{ X, X' }[/math] are independent samples (here, mini-batches) from the data distribution [math]\displaystyle{ p }[/math] and [math]\displaystyle{ Y, Y' }[/math] are independent samples from the generator distribution [math]\displaystyle{ g }[/math]. As long as [math]\displaystyle{ d }[/math] is a metric, [math]\displaystyle{ D_{GED}(p,g) }[/math] is a metric; thus, by the definition of a metric, [math]\displaystyle{ D_{GED}(p,g) \geq 0 }[/math] and [math]\displaystyle{ D_{GED}(p,g)=0 }[/math] if and only if [math]\displaystyle{ p=g }[/math].

MINI-BATCH ENERGY DISTANCE

The authors propose a new metric, the Mini-batch Energy Distance, which combines the energy distance with the primal form of optimal transport over the mini-batch distributions [math]\displaystyle{ g(Y) }[/math] and [math]\displaystyle{ p(X) }[/math]. Specifically, the entropy-regularized Sinkhorn distance is used as the metric [math]\displaystyle{ d }[/math] inside the generalized energy distance, where it is a valid metric between mini-batches:
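[math]\displaystyle{ D_{MED}^{2}(p, g) = 2\, \mathbb{E} \left[ \mathcal{W}_{c}(X, Y) \right] - \mathbb{E} \left[ \mathcal{W}_{c}(X, X') \right] - \mathbb{E} \left[ \mathcal{W}_{c}(Y, Y') \right] }[/math]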

where [math]\displaystyle{ X, X' }[/math] are independently sampled mini-batches from the data distribution [math]\displaystyle{ p }[/math] and [math]\displaystyle{ Y, Y' }[/math] are independent mini-batches from the generator distribution [math]\displaystyle{ g }[/math]. By combining equation (5) with the extra terms of the energy distance (in particular [math]\displaystyle{ - \mathcal{W}_c (X,X') }[/math] and [math]\displaystyle{ - \mathcal{W}_c (Y,Y') }[/math]), the objective becomes statistically consistent and the mini-batch gradients are unbiased.

OPTIMAL TRANSPORT GAN (OT-GAN)

A well-chosen cost function affects the statistical efficiency of the estimator. A common choice of [math]\displaystyle{ c }[/math], such as the Euclidean distance, generally performs poorly in high dimensions. The authors therefore suggest using the cosine distance between vectors [math]\displaystyle{ v_\eta (x) }[/math] and [math]\displaystyle{ v_\eta (y) }[/math], where [math]\displaystyle{ v_\eta }[/math] is a deep neural network that maps the mini-batch images into a learned latent space:
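[math]\displaystyle{ c_{\eta}(x, y) = 1 - \frac{ v_{\eta}(x) \cdot v_{\eta}(y) }{ \lVert v_{\eta}(x) \rVert_{2} \, \lVert v_{\eta}(y) \rVert_{2} } }[/math]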

where [math]\displaystyle{ v_\eta }[/math] is chosen (trained) to maximize the resulting mini-batch energy distance.
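A small sketch of this cost in NumPy, assuming VX and VY hold the latent features [math]\displaystyle{ v_\eta(x_i) }[/math] and [math]\displaystyle{ v_\eta(y_j) }[/math] as rows (the names are illustrative):

<pre>
import numpy as np

def cosine_cost_matrix(VX, VY):
    """C[i, j] = 1 - cosine similarity between v_eta(x_i) and v_eta(y_j)."""
    VX = VX / np.linalg.norm(VX, axis=1, keepdims=True)  # L2-normalize each feature vector
    VY = VY / np.linalg.norm(VY, axis=1, keepdims=True)
    return 1.0 - VX @ VY.T                               # (K, K) transport cost matrix
</pre>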

Unlike standard practice in GANs, the authors put more emphasis on training a good generator than on training the critic: the critic only needs to keep the cost function from assigning zero cost to regions of image space where the generated data and the real data differ. As long as the cost function is not degenerate, the generator in OT-GAN has a well-defined and statistically consistent objective even when the critic is not updated.

The algorithm is defined below. Due to the envelope theorem, the optimal matching found by the Sinkhorn algorithm can be treated as constant when computing gradients, so there is no need to backpropagate through the Sinkhorn iterations. Stochastic gradient descent is used as the optimization method.
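The following is a condensed PyTorch-style sketch of one OT-GAN update, assuming a generator gen, a critic v_eta mapping images to latent features, and their optimizers; all names, hyperparameters, and the critic-update schedule are illustrative placeholders rather than the paper's exact settings.

<pre>
import torch

def cost_matrix(vx, vy):
    # Cosine-distance transport cost between two mini-batches of latent features.
    vx = vx / vx.norm(dim=1, keepdim=True)
    vy = vy / vy.norm(dim=1, keepdim=True)
    return 1.0 - vx @ vy.t()

def sinkhorn_cost(C, beta=10.0, n_iters=50):
    # Soft matching M from Sinkhorn iterations; M is computed without gradients
    # (envelope theorem), so gradients flow only through the cost matrix C.
    with torch.no_grad():
        G = torch.exp(-beta * C)
        u = torch.ones(C.shape[0], device=C.device)
        v = torch.ones(C.shape[0], device=C.device)
        for _ in range(n_iters):
            u = 1.0 / (G @ v)
            v = 1.0 / (G.t() @ u)
        M = u[:, None] * G * v[None, :]
    return (M * C).sum()

def minibatch_energy_distance(gen, v_eta, x, x2, z, z2):
    # x, x2: two real mini-batches; z, z2: two latent noise mini-batches.
    y, y2 = gen(z), gen(z2)
    fx, fx2, fy, fy2 = v_eta(x), v_eta(x2), v_eta(y), v_eta(y2)
    W = lambda a, b: sinkhorn_cost(cost_matrix(a, b))
    # One combination of the cross and within terms of the mini-batch
    # energy distance (illustrative arrangement of the terms).
    return (W(fx, fy) + W(fx, fy2) + W(fx2, fy) + W(fx2, fy2)
            - 2 * W(fx, fx2) - 2 * W(fy, fy2))

def train_step(gen, v_eta, opt_g, opt_c, x, x2, z, z2, update_critic):
    # The generator minimizes the loss; the critic is only occasionally updated to maximize it.
    loss = minibatch_energy_distance(gen, v_eta, x, x2, z, z2)
    opt_g.zero_grad()
    opt_c.zero_grad()
    if update_critic:
        (-loss).backward()
        opt_c.step()
    else:
        loss.backward()
        opt_g.step()
    return loss.item()
</pre>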


EXPERIMENTS

MIXTURE OF GAUSSIAN DATASET

Compared to DC-GAN, OT-GAN has a statistically consistent objective, so the generator is not pushed in the wrong direction even when the cost function provides a very poor signal. To demonstrate this consistency advantage, the authors compared OT-GAN to the original GAN loss (DAN-S) on a simple task: recovering all 8 modes of a mixture of 8 Gaussians whose means are placed on a circle. MLPs with ReLU activation functions were used for this task. The critic was updated for only 15K iterations, and the generator distribution was then observed for another 25K iterations. The results show that the original GAN suffers from mode collapse after the discriminator is fixed, while OT-GAN recovers all 8 modes of the Gaussian mixture.

CIFAR-10

In this section, CIFAR-10 is used to inspect the effect of batch size on model training and image quality. OT-GAN and four other methods are compared using the inception score as the evaluation criterion. Figure 3 shows how the inception score changes over the iterations as the batch size increases from 200 to 8000. A larger batch size leads to a more stable model with a higher inception score, but it also requires more computational resources. The sample quality of all five methods is shown in Table 1, where OT-GAN achieves the best score.

IMAGENET DOGS

To investigate OT-GAN's performance on higher-resolution images, the dog subset of ImageNet (128×128) is used to train the model. Figure 6 shows that OT-GAN produces fewer nonsensical images and achieves a higher inception score than DCGAN.

CONDITIONAL GENERATION OF BIRDS

In this section, OT-GAN is compared against text-to-image generation models to test its performance on conditional image synthesis. Table 2 shows that the proposed model achieves the highest inception score compared to the other three models.

CONCLUSION

CRITIQUE