From Variational to Deterministic Autoencoders
== Presented by ==
Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Scholkopf
== Introduction ==
This paper presents an alternative, deterministic framework for generative modeling. The authors suggest that sampling from the stochastic encoder of a VAE can be interpreted as injecting noise into the input of a deterministic decoder, and they propose a regularized deterministic autoencoder (RAE) that generates samples comparable to or better than those produced by VAEs.
== Previous Work ==

== Motivation ==
The authors point to several drawbacks currently associated with VAEs, including:
* over-regularisation induced by the KL divergence term within the objective (Tolstikhin et al., 2017)
* posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
* increased variance of gradients caused by approximating expectations through sampling (Burda et al., 2015; Tucker et al., 2017)
These issues motivate their consideration of alternatives to the variational framework adopted by VAEs.
Furthermore, the authors view the random noise introduced by the VAE's reparameterization <math>z = \mu(x) + \sigma(x)\epsilon</math> as having a regularizing effect, in that it aids in learning a smoother latent space. This motivates their exploration of alternative regularisation methods for autoencoders that could be substituted for the VAE's random noise injection, thus eliminating the variational framework and its associated drawbacks.
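To make the contrast concrete, the following is a minimal sketch in Python with NumPy (the toy encoder and all names here are hypothetical, not taken from the paper) of how the VAE's reparameterization injects noise into the latent code, versus simply keeping the deterministic mean as an RAE-style autoencoder would:

<pre>
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    # Stand-in for a learned encoder: returns a mean and standard deviation
    # for the latent code (a fixed toy mapping, purely for illustration).
    mu = 0.5 * x
    sigma = np.full_like(x, 0.1)
    return mu, sigma

x = rng.normal(size=(4, 8))          # toy batch of inputs
mu, sigma = encoder(x)

eps = rng.standard_normal(mu.shape)  # eps ~ N(0, I)
z_vae = mu + sigma * eps             # stochastic latent code (VAE reparameterization)
z_det = mu                           # deterministic latent code (noise dropped, RAE-style)
</pre>

In the stochastic case the decoder sees a slightly different <math>z</math> for the same input on every pass; this is exactly the noise-injection view that the RAE replaces with an explicit regulariser.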