From Variational to Deterministic Autoencoders


Presented by

Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

Introduction

This paper presents a deterministic alternative framework for generative modelling. The authors investigate how the stochasticity of VAEs can be substituted with implicit and explicit regularization schemes. Furthermore, they present a generative mechanism within a deterministic auto-encoder that utilises an ex-post density estimation step, which can also be applied to existing VAEs to improve their sample quality. They further conduct an empirical comparison between VAEs and deterministic regularized auto-encoders and show that the latter generate samples of comparable or better quality on images and structured data.
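To make the ex-post density estimation step concrete, the sketch below fits a simple density model over the latent codes of a trained deterministic auto-encoder and decodes samples drawn from it. This is a minimal sketch rather than the authors' exact procedure: the encoder and decoder callables, the use of scikit-learn's GaussianMixture, and the choice of 10 mixture components are illustrative assumptions.

<pre>
import numpy as np
from sklearn.mixture import GaussianMixture

def sample_ex_post(encoder, decoder, X_train, n_samples=64, n_components=10):
    # Encode the training set into deterministic latent codes z = E(x).
    Z = np.stack([encoder(x) for x in X_train])

    # Ex-post density estimation: after training, fit a simple density
    # model over the latent codes (here a Gaussian mixture; illustrative).
    gmm = GaussianMixture(n_components=n_components)
    gmm.fit(Z)

    # Draw fresh latent codes from the fitted density and decode them,
    # turning the deterministic auto-encoder into a generative model.
    z_new, _ = gmm.sample(n_samples)
    return np.stack([decoder(z) for z in z_new])
</pre>

As noted above, the same step can also be applied to a trained VAE by fitting the density model to its latent codes, which is how the authors improve VAE sample quality as well.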

Previous Work

Motivation

The authors point to several drawbacks currently associated with VAEs, including:

  • over-regularisation induced by the KL divergence term within the objective (Tolstikhin et al., 2017)
  • posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
  • increased variance of gradients caused by approximating expectations through sampling (Burda et al., 2015; Tucker et al., 2017)

These issues motivate their consideration of alternatives to the variational framework adopted by VAEs.

Furthermore, the authors consider the VAE's introduction of random noise within the reparameterization <math>z = \mu(x) + \sigma(x)\epsilon</math> as having a regularization effect, whereby it promotes the learning of a smoother latent space. This motivates their exploration of alternative regularization schemes for auto-encoders that could be substituted in place of the VAE's random noise injection to produce equivalent or better generated samples, allowing the elimination of the variational framework and its associated drawbacks.
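For concreteness, a minimal sketch of this reparameterization follows: sampling z is rewritten as a deterministic function of x and an independent noise draw ε, so gradients can flow through μ(x) and σ(x). The mu_net and sigma_net callables are hypothetical encoder heads, not names from the paper.

<pre>
import numpy as np

def reparameterize(x, mu_net, sigma_net):
    # Hypothetical encoder heads producing the mean and standard
    # deviation of the approximate posterior q(z|x).
    mu, sigma = mu_net(x), sigma_net(x)

    # Reparameterization: z = mu(x) + sigma(x) * eps with eps ~ N(0, I).
    # The injected noise eps is what the authors interpret as an implicit
    # regularizer that smooths the learned latent space.
    eps = np.random.randn(*mu.shape)
    return mu + sigma * eps
</pre>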

Model Architecture

Experiment Results

Conclusion

Critiques

References

Burda, Y., Grosse, R., & Salakhutdinov, R. (2015). Importance Weighted Autoencoders. arXiv:1509.00519.

Ghosh, P., Sajjadi, M. S. M., Vergari, A., Black, M. J., & Schölkopf, B. (2020). From Variational to Deterministic Autoencoders. ICLR 2020.

Tolstikhin, I., Bousquet, O., Gelly, S., & Schölkopf, B. (2017). Wasserstein Auto-Encoders. arXiv:1711.01558.

Tucker, G., Mnih, A., Maddison, C. J., Lawson, J., & Sohl-Dickstein, J. (2017). REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. NeurIPS 2017.

van den Oord, A., Vinyals, O., & Kavukcuoglu, K. (2017). Neural Discrete Representation Learning. NeurIPS 2017.