# stat946w18/IMPROVING GANS USING OPTIMAL TRANSPORT


## Revision as of 02:04, 13 March 2018


## Introduction

Generative Adversarial Networks (GANs) are powerful generative models. A GAN consists of a generator and a discriminator (also called a critic). The generator is a neural network trained to generate data whose distribution matches that of the real data. The critic is another neural network, trained to separate the generated data from the real data. A loss function that measures the distance between the generated and real data distributions is essential for training the generator.
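The two-player setup can be sketched numerically. This is a minimal toy illustration, not the paper's method: a hypothetical linear "generator" g(z) = θz and a logistic "discriminator" d(x) on 1-D data, used only to evaluate the standard GAN value function.

```python
import numpy as np

rng = np.random.default_rng(0)

def d(x, w=1.0, b=0.0):
    # Discriminator: outputs the estimated probability that x is real.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def g(z, theta=0.5):
    # Generator: maps noise z into data space (toy linear map).
    return theta * z

real = rng.normal(loc=1.0, size=256)   # samples from the "real" distribution
fake = g(rng.normal(size=256))         # samples produced by the generator
# GAN value function: E[log d(real)] + E[log(1 - d(fake))]
value = np.log(d(real)).mean() + np.log(1 - d(fake)).mean()
print(value < 0)  # True: both terms are logs of probabilities in (0, 1)
```

The discriminator is trained to increase this value while the generator is trained to decrease it, which is the adversarial game described above.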

Optimal transport theory evaluates the distance between distributions using a metric, which provides another route to training the generator. Its main advantage over the distance measures used in standard GANs is a closed-form solution, which makes the training process tractable. However, when the distance is estimated from mini-batches, the resulting gradients are biased, which can make the statistical estimation inconsistent.
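The mini-batch bias can be seen in a toy experiment. Under the assumption of 1-D samples, the empirical Wasserstein-1 distance between two equal-sized samples is the mean absolute difference of their order statistics. Even when both mini-batches come from the same distribution (true distance 0), the mini-batch estimate is strictly positive on average:

```python
import numpy as np

def empirical_w1(x, y):
    # Empirical Wasserstein-1 distance for 1-D samples of equal size:
    # mean absolute difference of sorted samples (order statistics).
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
# Both mini-batches are drawn from the SAME standard normal, so the
# true Wasserstein distance between the underlying distributions is 0.
estimates = [empirical_w1(rng.normal(size=32), rng.normal(size=32))
             for _ in range(200)]
print(np.mean(estimates) > 0)  # True: the mini-batch estimator is biased upward
```

Because the expected mini-batch estimate is not the true distance, gradients computed from it point in systematically wrong directions, which is the issue OT-GAN is designed to address.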

This paper presents a GAN variant named OT-GAN, which incorporates a discriminative metric called the 'Mini-batch Energy Distance' into its critic in order to overcome the issue of biased gradients.

## GANs AND OPTIMAL TRANSPORT

### Generative Adversarial Nets

The original GAN is first reviewed. Its objective function is:

[[File:equation1.png|700px]]

The goal of GAN training is to find a pair (g, d) of generator and discriminator that achieves a Nash equilibrium. However, because both networks are trained with gradient-descent techniques, training can fail to converge.
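The convergence failure can be demonstrated on the classic bilinear game min_x max_y xy, a toy stand-in for the GAN minimax problem (not from the paper). The unique equilibrium is (0, 0), yet simultaneous gradient descent/ascent spirals away from it:

```python
# Simultaneous gradient descent/ascent on the bilinear game min_x max_y x*y.
# The unique Nash equilibrium is (0, 0), but the iterates spiral outward.
x, y, lr = 1.0, 1.0, 0.1
r0 = (x**2 + y**2) ** 0.5          # initial distance from equilibrium
for _ in range(100):
    gx, gy = y, x                   # gradients of x*y w.r.t. x and y
    x, y = x - lr * gx, y + lr * gy # descent on x, ascent on y (simultaneous)
r = (x**2 + y**2) ** 0.5
print(r > r0)  # True: distance from the equilibrium grows every step
```

Each update multiplies the squared distance from the origin by (1 + lr²), so the iterates diverge instead of converging to the equilibrium, mirroring the instability described above.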

To address this problem, Arjovsky et al. (2017) proposed the Wasserstein distance (Earth-Mover distance), grounded in optimal transport theory, to remedy the weaknesses of the original objective function. Since computing the Wasserstein distance exactly is usually intractable, the proposed Wasserstein GAN (W-GAN) estimates it by switching the optimal transport problem to its dual formulation over a set of Lipschitz functions; a neural network can then be used to obtain the estimate.
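A hedged sketch of this dual idea: W1(p, q) = sup over 1-Lipschitz f of E_p[f] − E_q[f]. As an assumption for illustration, the "critic" here is a single linear function f(x) = a·x with weight clipping |a| ≤ 1 standing in for the Lipschitz constraint, rather than the neural network used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.normal(loc=0.0, size=1000)   # "real" data
q = rng.normal(loc=2.0, size=1000)   # "generated" data, shifted by 2

a, lr = 0.0, 0.1
for _ in range(100):
    # Gradient of the dual objective E_p[a*x] - E_q[a*x] w.r.t. a.
    grad = p.mean() - q.mean()
    # Gradient ascent on the critic, then clip the weight to keep f 1-Lipschitz.
    a = np.clip(a + lr * grad, -1.0, 1.0)

w1_est = a * (p.mean() - q.mean())   # should be close to the true distance of 2.0
print(w1_est)
```

The clipped linear critic saturates at |a| = 1, and the dual objective then recovers (approximately) the true Wasserstein-1 distance between the two shifted normals.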

W-GAN resolves the unstable training process of the original GAN: training becomes tractable, and a fully connected multi-layer network is sufficient. However, the Wasserstein distance is still only approximated, so the critic cannot be fully optimized.