From Variational to Deterministic Autoencoders


Presented by

Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

== Introduction ==

This paper presents a deterministic alternative framework for generative modeling. The authors observe that sampling from the stochastic encoder of a VAE can be interpreted as injecting noise into the input of a deterministic decoder, and they propose a framework for a regularized deterministic autoencoder (RAE) that replaces this noise injection with explicit regularization and generates samples comparable to or better than those produced by VAEs.
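To make this idea concrete, below is a minimal PyTorch sketch, assuming a simple MLP encoder and decoder: the encoder emits a deterministic code with no sampling step, and the VAE's KL term is replaced by an L2 penalty on the latent codes plus a regularizer on the decoder (here plain weight decay, one of the regularizers considered for RAEs). The class and function names, layer sizes, and coefficients <code>beta</code> and <code>lam</code> are illustrative assumptions, not the authors' implementation.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class RAE(nn.Module):
    """A deterministic autoencoder: unlike a VAE, the forward pass has
    no sampling step, so no noise is injected into the decoder input."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # deterministic code, no eps ~ N(0, I)
        return self.decoder(z), z

def rae_loss(model, x, beta=1e-4, lam=1e-7):
    """loss = reconstruction + beta * ||z||^2 + lam * ||decoder weights||^2.
    beta and lam are illustrative values, not the paper's tuned ones."""
    x_hat, z = model(x)
    rec = ((x_hat - x) ** 2).sum(dim=1).mean()   # per-sample squared error
    z_pen = 0.5 * (z ** 2).sum(dim=1).mean()     # keeps latent codes bounded
    dec_reg = sum((p ** 2).sum() for p in model.decoder.parameters())
    return rec + beta * z_pen + lam * dec_reg

# Usage sketch on random inputs:
model = RAE()
loss = rae_loss(model, torch.rand(32, 784))
loss.backward()
</syntaxhighlight>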

== Previous Work ==

== Motivation ==

The authors point to several drawbacks currently associated with VAEs, including:

- over-regularization induced by the KL divergence term in the objective (Tolstikhin et al., 2017)
- posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
- increased variance in gradients caused by approximating expectations through sampling

These issues motivate an assessment of whether the variational framework adopted by VAEs is necessary for generative modeling, and whether an alternative framework that regularizes the decoder in place of injecting random noise could produce superior generative results.

The authors are therefore motivated to explore the use of regularization to produce a smooth latent space, in which similar inputs are mapped to similar latent representations and small variations in the latent variables produce only slight variations in the decoded reconstructions, as sketched below.
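The requirement that small latent perturbations produce small output changes can be made concrete by bounding the gradient of the decoder with respect to the latent code, in the spirit of the paper's gradient-penalty variant (RAE-GP). The helper below is an assumption-level PyTorch sketch, not the authors' implementation: <code>smoothness_penalty</code> and the toy decoder dimensions are hypothetical, and the single backward pass penalizes a Jacobian-vector product as a cheap proxy for the full Jacobian norm.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def smoothness_penalty(decoder, z):
    # Penalize the gradient of the decoder output w.r.t. the latent code,
    # so that nearby codes decode to nearby outputs. Summing the outputs
    # before the backward pass yields a Jacobian-vector product in a
    # single pass (a cheap proxy for the full Jacobian norm).
    z = z.detach().requires_grad_(True)
    out = decoder(z)
    (grad,) = torch.autograd.grad(out.sum(), z, create_graph=True)
    return (grad ** 2).sum(dim=1).mean()

# Usage sketch with a toy decoder (dimensions are illustrative):
decoder = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 784))
penalty = smoothness_penalty(decoder, torch.randn(32, 16))
# During training, add lam * penalty to the autoencoder loss.
</syntaxhighlight>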

== Model Architecture ==

== Experiment Results ==

== Conclusion ==

== Critiques ==

== References ==