# stat946w18/AmbientGAN: Generative Models from Lossy Measurements


# Introduction

Generative models are powerful tools for concisely representing the structure in large datasets. Generative Adversarial Networks (GANs) can model complex distributions, but training them requires access to large amounts of high-quality data. Often we only have access to noisy or partial observations, which will, from here on, be referred to as measurements of the true data. If we know the measurement function and would like to train a generative model for the true data, there are several ways to proceed, with varying degrees of success. We use noisy MNIST data as an illustrative example and show the results of (1) ignoring the problem, (2) trying to recover the lost information, and (3) using AmbientGAN to recover the true data distribution. Suppose we only see MNIST data that has been run through a Gaussian kernel (blurred), with noise from a $N(0, 0.5^2)$ distribution added to each pixel.

### Ignore the problem

Train a generative model directly on the measured data. This obviously cannot recover the true distribution as it existed before measurement.

### Try to recover the information lost

This works better than ignoring the problem, but its success depends on how easily the measurement function can be inverted.

### AmbientGAN

Ashish Bora, Eric Price, and Alexandros G. Dimakis propose AmbientGAN as a way to recover the true underlying distribution from measurements of the true data. AmbientGAN trains a generator whose outputs, after measurement, attempt to fool a discriminator; the discriminator must distinguish between real and generated measurements. The paper was published at ICLR 2018.

## Contributions

The paper makes the following contributions:

### Theoretical Contribution

The authors show that the distribution of measured images uniquely determines the distribution of original images. This implies that a pure Nash equilibrium for the GAN game must find a generative model that matches the true distribution. They show similar results for a dropout measurement model, where each pixel is set to zero with some probability p, and a random projection measurement model, where they observe the inner product of the image with a random Gaussian vector.

The authors also prove theorems showing that the required uniqueness assumption is satisfied under the Gaussian-Projection, Convolve+Noise, and Block-Pixels measurement models. For example, the Gaussian-Projection theorem guarantees the uniqueness of the underlying distribution. Together these results establish that the true underlying distribution can be recovered with the AmbientGAN framework under those measurement models.

### Empirical Contribution

The authors evaluate AmbientGAN empirically on the MNIST and CelebA datasets, including cases where the measurement model is not known exactly, and show that AmbientGAN recovers much of the underlying structure.

# Related Work

Currently there exist two distinct approaches for constructing neural-network-based generative models: variational/autoregressive methods [4,5] and adversarial methods [6]. The adversarial approach has been shown to be very successful at modeling complex data distributions such as images, 3D models, state-action distributions, and more. This paper is related to the work in [7], where the authors create 3D object shapes from a dataset of 2D projections; the paper states that the work in [7] is a special case of the AmbientGAN framework in which the measurement process creates 2D projections using weighted sums of voxel occupancies.

# Datasets and Model Architectures

Three datasets are used for the experiments: MNIST, CelebA, and CIFAR-10. We briefly describe the generative models used. For the MNIST dataset, we use two GAN models: a conditional DCGAN and an unconditional Wasserstein GAN with gradient penalty (WGAN-GP). For the CelebA dataset, we use an unconditional DCGAN. For the CIFAR-10 dataset, we use an Auxiliary Classifier Wasserstein GAN with gradient penalty (ACWGAN-GP). For measurements with 2D outputs, i.e. Block-Pixels, Block-Patch, Keep-Patch, Extract-Patch, and Convolve+Noise, we use the same discriminator architectures as in the original work. For 1D projections, i.e. Pad-Rotate-Project and Pad-Rotate-Project-θ, we use fully connected discriminators: 25-25-1 for the MNIST dataset and 100-100-1 for the CelebA dataset.

# Model

For the following variables, superscript $r$ denotes the true distributions while superscript $g$ denotes the generated distributions. Let $x$ represent the underlying space and $y$ the measurements.

Thus, $p_x^r$ is the real underlying distribution over $\mathbb{R}^n$ that we are interested in. We assume that our (known) measurement functions, $f_\theta: \mathbb{R}^n \to \mathbb{R}^m$, are parameterized by $\Theta \sim p_\theta$; we then observe $Y^r = f_\Theta(X^r) \sim p_y^r$, where $p_y^r$ is a distribution over the measurements $y$.

Mirroring the standard GAN setup, we let $Z \in \mathbb{R}^k$, $Z \sim p_z$, and $\Theta \sim p_\theta$ be random variables drawn from distributions that are easy to sample.

If we have a generator $G: \mathbb{R}^k \to \mathbb{R}^n$, then we can generate $X^g = G(Z)$, which has distribution $p_x^g$, and a measurement $Y^g = f_\Theta(G(Z))$, which has distribution $p_y^g$.

Unfortunately, we do not observe any samples $X^r \sim p_x^r$, so we cannot use a discriminator directly on $G(Z)$ to train the generator. Instead, we use the discriminator to distinguish between $Y^g = f_\Theta(G(Z))$ and $Y^r$. That is, we train the discriminator $D: \mathbb{R}^m \to \mathbb{R}$ to detect whether a measurement came from $p_y^r$ or $p_y^g$.

AmbientGAN has the objective function:

\begin{align} \min_G \max_D \mathbb{E}_{Y^r \sim p_y^r}[q(D(Y^r))] + \mathbb{E}_{Z \sim p_z, \Theta \sim p_\theta}[q(1 - D(f_\Theta(G(Z))))] \end{align}

where $q(\cdot)$ is the quality function: for the standard GAN, $q(x) = \log(x)$, and for the Wasserstein GAN, $q(x) = x$.

As a technical limitation, we require $f_\theta$ to be differentiable with respect to its input for all values of $\theta$.

With this setup, at each iteration we sample $Z \sim p_z$, $\Theta \sim p_\theta$, and $Y^r \sim U\{y_1, \cdots, y_s\}$, and use them to compute stochastic gradients of the objective function, alternating between updates to $G$ and $D$.
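To make the objective concrete, the sketch below evaluates it on a minibatch of discriminator outputs. `ambient_gan_loss` is a hypothetical helper, not code from the paper; it simply plugs a quality function $q$ into the two expectation terms of the objective above.

```python
import numpy as np

def ambient_gan_loss(d_real_meas, d_fake_meas, q=np.log):
    """Minibatch estimate of the AmbientGAN objective (hypothetical helper).

    d_real_meas: discriminator outputs D(Y^r) on real measurements.
    d_fake_meas: discriminator outputs D(f_Theta(G(Z))) on measured fakes.
    q: quality function -- np.log for the standard GAN, identity for WGAN.
    Returns E[q(D(Y^r))] + E[q(1 - D(f_Theta(G(Z))))], which D maximizes
    and G minimizes.
    """
    d_real_meas = np.asarray(d_real_meas, dtype=float)
    d_fake_meas = np.asarray(d_fake_meas, dtype=float)
    return float(np.mean(q(d_real_meas)) + np.mean(q(1.0 - d_fake_meas)))
```

In practice $G$ and $D$ would be neural networks and this value would be backpropagated through the (differentiable) measurement function $f_\Theta$.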

# Empirical Results

The paper goes on to present results for AmbientGAN under various measurement functions, compared against baseline models. We have already seen one example in the introduction: a comparison of AmbientGAN in the Convolve+Noise case against the ignore baseline and the unmeasure baseline.

### Convolve + Noise

Additional results are presented for the Convolve+Noise case on the CelebA dataset. AmbientGAN is compared to a baseline that uses Wiener deconvolution, and it clearly performs better in this case. The measurement is created using a Gaussian kernel and IID Gaussian noise: $f_{\Theta}(x) = k*x + \Theta$, where $*$ is the convolution operation, $k$ is the convolution kernel, and $\Theta \sim p_{\theta}$ is the noise.

Images that have undergone the Convolve+Noise transformation (left). Results with Wiener deconvolution (middle). Results with AmbientGAN (right).
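A minimal numpy sketch of the Convolve+Noise measurement $f_\Theta(x) = k*x + \Theta$ described above; `convolve_noise` is a hypothetical helper, and the zero-padded "same" convolution is an assumption about boundary handling (the paper does not prescribe it):

```python
import numpy as np

def convolve_noise(x, k, sigma_noise=0.5, rng=None):
    """Convolve+Noise measurement sketch: f_Theta(x) = k * x + Theta,
    with Theta ~ N(0, sigma_noise^2) iid per pixel (hypothetical helper).

    x: 2D image, k: 2D kernel. Uses zero-padded 'same' cross-correlation,
    which coincides with convolution for symmetric kernels like a Gaussian.
    """
    rng = np.random.default_rng(rng)
    kh, kw = k.shape
    pad = ((kh // 2, kh - kh // 2 - 1), (kw // 2, kw - kw // 2 - 1))
    xp = np.pad(x, pad)  # zero padding
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # accumulate one shifted, weighted copy of the image per tap
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out + rng.normal(0.0, sigma_noise, size=x.shape)
```

With `sigma_noise=0` and an averaging kernel, interior pixels of a constant image are unchanged while border pixels shrink toward zero due to the padding.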

### Block-Pixels

With the block-pixels measurement function each pixel is independently set to 0 with probability $p$.

Measurements from the CelebA dataset with $p=0.95$ (left). Images generated by a GAN trained on unmeasured (via blurring) data (middle). Results generated by AmbientGAN (right).
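The Block-Pixels measurement is simple enough to sketch directly; `block_pixels` is a hypothetical helper implementing the definition above (each pixel independently zeroed with probability $p$):

```python
import numpy as np

def block_pixels(x, p, rng=None):
    """Block-Pixels measurement sketch (hypothetical helper).

    Each pixel of x is independently set to 0 with probability p.
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) >= p  # keep each pixel with probability 1 - p
    return x * mask
```

At $p=0.95$, as in the CelebA experiment above, roughly 5% of pixels survive each measurement.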

### Block-Patch

A random 14x14 patch is set to zero (left). Unmeasured using Navier-Stokes inpainting (middle). AmbientGAN (right).

### Pad-Rotate-Project-$\theta$

Results generated by AmbientGAN, where the measurement function zero-pads the image, rotates it by $\theta$, and projects it onto the x-axis. For each measurement, the value of $\theta$ is known.

The generated images have only the basic features of a face, and the paper refers to this as a failure case. Nonetheless, the model performs relatively well given how lossy the measurement function is.

For the Keep-Patch measurement model, no pixels outside a box are known, so inpainting methods are not suitable. For the Pad-Rotate-Project-θ measurements, a conventional technique would be to sample many angles and use techniques for inverting the Radon transform. However, since only a few projections are observed at a time, these methods are not readily applicable, and it is unclear how to obtain an approximate inverse function.
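A sketch of the Pad-Rotate-Project-θ measurement just discussed; `pad_rotate_project` is a hypothetical helper, and the nearest-neighbour rotation and padding width are implementation assumptions (the paper does not specify the interpolation scheme):

```python
import numpy as np

def pad_rotate_project(x, theta, pad=4):
    """Pad-Rotate-Project-theta measurement sketch (hypothetical helper).

    Zero-pads the image, rotates it by theta about its centre
    (nearest-neighbour sampling), then projects onto the x-axis by
    summing each column. theta is assumed known per measurement.
    """
    xp = np.pad(x, pad)
    h, w = xp.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w))
    c, s = np.cos(theta), np.sin(theta)
    # inverse rotation: find the source pixel for each target pixel
    src_y = c * (ys - cy) + s * (xs - cx) + cy
    src_x = -s * (ys - cy) + c * (xs - cx) + cx
    sy = np.rint(src_y).astype(int)
    sx = np.rint(src_x).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    rotated = np.where(valid, xp[np.clip(sy, 0, h - 1), np.clip(sx, 0, w - 1)], 0.0)
    return rotated.sum(axis=0)  # project onto the x-axis
```

Note how lossy this is: an $n \times n$ image collapses to a single 1D profile per sampled angle.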

### Explanation of Inception Score

To evaluate GAN performance, the authors use the inception score, a metric introduced by Salimans et al. (2016). To compute it, a pre-trained Inception classification model (Szegedy et al. 2016) is applied to each datapoint, and the KL divergence between the label distribution conditional on that datapoint and the marginal label distribution is computed; the inception score is the exponential of the average of this divergence over datapoints. The idea is that meaningful images should be recognized by the Inception model as belonging to some class, so the conditional distribution should have low entropy, while the model should produce a variety of images, so the marginal should have high entropy. Thus an effective GAN should have a high inception score.
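The computation above can be sketched given the classifier's conditional label distributions; `inception_score` is a hypothetical helper that assumes the Inception probabilities $p(y|x_i)$ have already been computed:

```python
import numpy as np

def inception_score(cond_probs, eps=1e-12):
    """Inception score sketch (hypothetical helper, after Salimans et al. 2016).

    cond_probs: (num_samples, num_classes) array, row i is p(y | x_i).
    Returns exp( mean_i KL( p(y|x_i) || p(y) ) ), where p(y) is the
    marginal label distribution averaged over the samples.
    """
    p = np.asarray(cond_probs, dtype=float)
    marginal = p.mean(axis=0)  # p(y)
    # per-sample KL divergence; eps guards log(0)
    kl = np.sum(p * (np.log(p + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

The score ranges from 1 (all samples get the same label distribution) up to the number of classes (confident and perfectly diverse predictions).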

### MNIST Inception

AmbientGAN was compared with the baselines by training several models with different blocking probabilities $p$. The plot on the left shows how the inception scores change with $p$. All four models are similar when no pixels are blocked $(p=0)$. As the blocking probability increases, the AmbientGAN models maintain relatively stable performance and outperform the baseline models. AmbientGAN is therefore more robust than the baseline models.

The plot on the right shows how the inception scores change as the standard deviation of the additive Gaussian noise increases. The baselines perform better when the noise is small, but as the variance increases, the AmbientGAN models perform much better than the baselines. Furthermore, AmbientGAN retains high inception scores as the measurements become increasingly lossy.

For 1D projections, the Pad-Rotate-Project model achieved an inception score of 4.18, while the Pad-Rotate-Project-θ model achieved 8.12, close to the vanilla GAN's score of 8.99.

### CIFAR-10 Inception

AmbientGAN is faster to train and more robust even on more complex distributions such as CIFAR-10. Similar trends were observed on the CIFAR-10 data: AmbientGAN maintains a relatively stable inception score as the blocking probability is increased.

### Robustness To Measurement Model

To empirically gauge robustness to measurement-modelling error, the authors used the Block-Pixels measurement model: the measured dataset was generated with true blocking probability $p^* = 0.5$, and several models were trained, each assuming a different blocking probability $p$. The inception scores were plotted as a function of $p$, shown on the left below.

The authors observe that the inception score peaks when the model uses the correct probability, but decreases smoothly as the probability moves away, demonstrating some robustness.

### Compressed Sensing

As described in Bora et al. (2017) [8], generative models were found to outperform sparsity-based approaches in compressed sensing. Building on this, the generator from AmbientGAN can be tested against Lasso to determine the number of measurements required to minimize reconstruction error. As shown on the right of Figure 16, AmbientGAN outperforms Lasso using a fraction of the number of measurements.

# Theoretical Results

The theoretical results in the paper prove that the true underlying distribution $p_x^r$ can be recovered when the data come from the Gaussian-Projection, Convolve+Noise, or Block-Pixels measurement models. They do this by showing that the measurement distribution $p_y^r$ is induced by a unique distribution $p_x^r$. Thus, even when the measurement itself is non-invertible, its effect on the distribution $p_x^r$ is invertible. Lemma 5.1 shows this is sufficient to give the AmbientGAN training process a consistency guarantee. For full proofs of the results, see Appendix A of the paper.

### Lemma 5.1

Let $p_x^r$ be the true data distribution, and $p_\theta$ be the distributions over the parameters of the measurement function. Let $p_y^r$ be the induced measurement distribution.

Assume for $p_\theta$ there is a unique probability distribution $p_x^r$ that induces $p_y^r$.

Then for the standard GAN model if the discriminator $D$ is optimal such that $D(\cdot) = \frac{p_y^r(\cdot)}{p_y^r(\cdot) + p_y^g(\cdot)}$, then a generator $G$ is optimal if and only if $p_x^g = p_x^r$.
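On a discrete measurement space, the optimal discriminator in the lemma is a one-line computation; the sketch below (a hypothetical helper, for illustration only) shows that when the generated measurement distribution matches the real one, $D$ is forced to $1/2$ everywhere, i.e. it can do no better than chance:

```python
import numpy as np

def optimal_discriminator(p_r, p_g):
    """Lemma 5.1's optimal discriminator on a discrete measurement space.

    p_r, p_g: arrays of real / generated measurement probabilities.
    Returns D(y) = p_y^r(y) / (p_y^r(y) + p_y^g(y)) elementwise.
    """
    p_r = np.asarray(p_r, dtype=float)
    p_g = np.asarray(p_g, dtype=float)
    return p_r / (p_r + p_g)
```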

### Theorem 5.2

For the Gaussian-Projection measurement model, there is a unique underlying distribution $p_x^{r}$ that can induce the observed measurement distribution $p_y^{r}$.

### Theorem 5.3

Let $\mathcal{F} (\cdot)$ denote the Fourier transform and let $supp (\cdot)$ be the support of a function. Consider the Convolve+Noise measurement model with convolution kernel $k$ and additive-noise distribution $p_\theta$. If $supp( \mathcal{F} (k))^{c}=\emptyset$ and $supp( \mathcal{F} (p_\theta))^{c}=\emptyset$, then there is a unique distribution $p_x^{r}$ that can induce the measurement distribution $p_y^{r}$.

### Theorem 5.4

Assume that each image pixel takes values in a finite set $P$, so that $x \in P^n \subset \mathbb{R}^{n}$. Assume $0 \in P$, and consider the Block-Pixels measurement model with $p$ the probability of blocking a pixel. If $p \lt 1$, then there is a unique distribution $p_x^{r}$ that can induce the measurement distribution $p_y^{r}$. Further, for any $\epsilon \gt 0$ and $\delta \in (0, 1]$, given a dataset of $$s=\Omega \left( \frac{|P|^{2n}}{(1-p)^{2n} \epsilon^{2}} \log \left( \frac{|P|^{n}}{\delta} \right) \right)$$ IID measurement samples from $p_y^r$, if the discriminator $D$ is optimal, then with probability $\geq 1 - \delta$ over the dataset, any optimal generator $G$ must satisfy $d_{TV} \left( p^g_x , p^r_x \right) \leq \epsilon$, where $d_{TV} \left( \cdot, \cdot \right)$ is the total variation distance.

# Conclusion

Generative models are powerful tools, but constructing a generative model requires a large, high quality dataset of the distribution of interest. The authors show how to relax this requirement, by learning a distribution from a dataset that only contains incomplete, noisy measurements of the distribution. This allows for the construction of new generative models of distributions for which no high quality dataset exists.

# Future Research

One critical weakness of AmbientGAN is the assumption that the measurement model is known and that $f_\theta$ is differentiable. In fact, when the measurement model is known, there is no obvious reason not to invert the noisy measurement first (as illustrated in the second approach above). It would be nice to be able to train an AmbientGAN model given an unknown measurement model plus a small sample of unmeasured data, or at the very least to remove the differentiability restriction on $f_\theta$.

A related piece of work is here. In particular, Algorithm 2 in the paper excluding the discriminator is similar to AmbientGAN.

# Open Source Code

An implementation of Ambient GAN can be found here: https://github.com/AshishBora/ambient-gan.

# References

1. https://openreview.net/forum?id=Hy7fDog0b
2. Salimans, Tim, et al. "Improved techniques for training gans." Advances in Neural Information Processing Systems. 2016.
3. Szegedy, Christian, et al. "Rethinking the inception architecture for computer vision." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
4. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv:1312.6114, 2013.
5. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
6. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural infor- mation processing systems, pp. 2672–2680, 2014.
7. Matheus Gadelha, Subhransu Maji, and Rui Wang. 3d shape induction from 2d views of multiple objects. arXiv preprint arXiv:1612.05872, 2016.
8. Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generative models. arXiv preprint arXiv:1703.03208, 2017.