# From Variational to Deterministic Autoencoders

## Presented by

Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

## Introduction

This paper presents a deterministic alternative framework for generative modeling. The authors observe that sampling from the stochastic encoder of a VAE can be interpreted as injecting noise into the input of a deterministic decoder, and they propose the regularized deterministic autoencoder (RAE), a framework that generates samples comparable to or better than those produced by VAEs.

## Motivation

The authors point to several drawbacks currently associated with VAEs, including:

• over-regularisation induced by the KL divergence term within the objective (Tolstikhin et al., 2017)
• posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
• increased variance of gradients caused by approximating expectations through sampling (Burda et al., 2015; Tucker et al., 2017)

These issues motivate their consideration of alternatives to the variational framework adopted by VAEs.

Furthermore, the authors view the random noise that VAEs introduce through the reparameterization $z = \mu(x) + \sigma(x)\epsilon$ as having a regularization effect, since it helps the model learn a smoother latent space. This motivates their exploration of alternative regularization schemes for an autoencoder that could be substituted for the VAE's random noise injection, thus eliminating the variational framework and its associated drawbacks.
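The noise-injection view above can be sketched numerically. The snippet below is a minimal illustration (not the paper's implementation) of the reparameterization $z = \mu(x) + \sigma(x)\epsilon$, where the encoder outputs `mu` and `sigma` are hypothetical placeholders standing in for a neural network's outputs:

```python
import numpy as np

def reparameterize(mu, sigma, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Equivalently: inject Gaussian noise scaled by sigma into the
    otherwise deterministic code mu before it reaches the decoder.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for a 4-dimensional latent space.
mu = np.array([0.5, -1.0, 0.0, 2.0])
sigma = np.array([0.1, 0.2, 0.05, 0.3])

z = reparameterize(mu, sigma, rng)
# z is a stochastic perturbation of mu; with sigma -> 0 the
# encoder becomes deterministic and z collapses to mu.
```

Under this reading, replacing the noise $\sigma(x)\epsilon$ with an explicit regularizer on the decoder yields a deterministic model with a comparable smoothing effect, which is the substitution the RAE framework makes.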