From Variational to Deterministic Autoencoders

Revision as of 01:55, 31 October 2020

Presented by

Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

Introduction

This paper presents a deterministic alternative framework for generative modeling. The authors observe that sampling from the stochastic encoder of a VAE can be interpreted as injecting noise into the input of a deterministic decoder, and they propose the regularized deterministic autoencoder (RAE), which generates samples that are comparable to or better than those produced by VAEs.
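The idea of trading stochasticity for explicit regularization can be sketched as a training objective: a reconstruction term plus a penalty on the latent codes and on the decoder. The sketch below is a minimal illustration under assumed forms (MSE reconstruction, an L2 penalty on codes, and decoder weight decay); the function name and the coefficient values are illustrative, not the paper's tuned settings.

```python
import numpy as np

def rae_loss(x, x_rec, z, dec_weights, beta=1e-3, lam=1e-4):
    """Sketch of a regularized deterministic autoencoder objective:
    reconstruction error + L2 penalty on latent codes + decoder weight decay.
    beta and lam are illustrative hyperparameters."""
    l_rec = np.mean(np.sum((x - x_rec) ** 2, axis=1))   # reconstruction term
    l_z = np.mean(np.sum(z ** 2, axis=1))               # penalty on ||z||^2
    l_reg = sum(np.sum(w ** 2) for w in dec_weights)    # decoder weight decay
    return l_rec + beta * l_z + lam * l_reg
```

In a VAE, the KL term plays the regularizing role; here that role is taken over by the explicit penalties, so no sampling is needed during training.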

Previous Work

Motivation

The authors point to several drawbacks currently associated with VAEs, including:

  • over-regularisation induced by the KL divergence term within the objective (Tolstikhin et al., 2017)
  • posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
  • increased variance of gradients caused by approximating expectations through sampling (Burda et al., 2015; Tucker et al., 2017)

These issues motivate their consideration of alternatives to the variational framework adopted by VAEs.

Furthermore, the authors view the VAE's injection of random noise within the reparameterization [math]\displaystyle{ z = \mu(x) + \sigma(x)\epsilon }[/math] as having a regularizing effect that aids in learning a smoother latent space. This motivates their exploration of alternative regularisation methods for an autoencoder that could substitute for the VAE's random noise injection, thus eliminating the variational framework and its associated drawbacks.
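The reparameterization above can be sketched in a few lines. The function names are illustrative; the point is that the noise [math]\displaystyle{ \epsilon }[/math] is sampled outside the deterministic transform, so gradients flow through [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma }[/math], while an RAE simply drops the noise and uses the encoder output directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma):
    """VAE reparameterization: z = mu(x) + sigma(x) * eps with eps ~ N(0, I).
    The injected noise is what the authors interpret as a regularizer."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def deterministic_encode(mu):
    """RAE alternative: no noise; the encoder output is the latent code."""
    return mu
```

As [math]\displaystyle{ \sigma(x) \to 0 }[/math] the stochastic code collapses to the deterministic one, which is the degenerate case the RAE takes as its starting point.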

Model Architecture

Experiment Results

Conclusion

Critiques

References