From Variational to Deterministic Autoencoders

Revision as of 13:23, 31 October 2020

Presented by

Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Scholkopf

Introduction

This paper presents an alternative, deterministic framework for generative modelling, titled Regularized Autoencoders (RAEs). The authors investigate how the stochasticity of VAEs can be substituted with implicit and explicit regularization schemes. Furthermore, they present a generative mechanism within a deterministic autoencoder, utilising an ex-post density estimation step, that can also be applied to existing VAEs to improve their sample quality. They further conduct an empirical comparison between VAEs and deterministic regularized autoencoders and show that the latter can generate samples that are comparable or better when applied to images and structured data.

Previous Work

The proposed method modifies the architecture of the existing Variational Autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014).

Motivation

The authors point to several drawbacks currently associated with VAEs, including:

  • over-regularisation induced by the KL divergence term within the objective (Tolstikhin et al., 2017)
  • posterior collapse in conjunction with powerful decoders (van den Oord et al., 2017)
  • increased variance of gradients caused by approximating expectations through sampling (Burda et al., 2015; Tucker et al., 2017)

These issues motivate their consideration of alternatives to the variational framework adopted by VAEs.

Furthermore, the authors view the VAE's injection of random noise within the reparameterization [math]\displaystyle{ z = \mu(x) +\sigma(x)\epsilon }[/math] as having a regularization effect, whereby it promotes the learning of a smoother latent space. This motivates their exploration of alternative regularization schemes for autoencoders that could be substituted in place of the VAE's random noise injection to produce equivalent or better generated samples. This would allow for the elimination of the variational framework and its associated drawbacks.
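The contrast between the stochastic VAE latent and its deterministic counterpart can be sketched numerically. The toy means and standard deviations below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative encoder outputs for a single input x (hypothetical values).
mu = np.array([0.5, -1.0])      # mu(x): predicted latent mean
sigma = np.array([0.1, 0.2])    # sigma(x): predicted latent std dev

# VAE reparameterization: z = mu(x) + sigma(x) * eps, with eps ~ N(0, I).
eps = rng.standard_normal(mu.shape)
z_vae = mu + sigma * eps

# Deterministic alternative: drop the noise injection entirely,
# so the latent code is just the encoder's mean output.
z_rae = mu

print(z_vae, z_rae)
```

Removing the noise removes its implicit smoothing of the latent space, which is exactly what the explicit regularizers in the RAE objective are meant to replace.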

Framework Architecture

The Regularized Autoencoder proposes two modifications to the existing VAE architecture. Firstly, it eliminates the injection of random noise [math]\displaystyle{ \epsilon }[/math] from the reparameterization of the latent variable [math]\displaystyle{ z = \mu(x) +\sigma(x)\epsilon }[/math]. Secondly, it proposes a new loss function [math]\displaystyle{ \mathcal{L}_{RAE} }[/math] defined as: \begin{align} \mathcal{L}_{RAE} = \mathcal{L}_{REC} + \beta \mathcal{L}^{RAE}_Z + \lambda \mathcal{L}_{REG} \end{align} where [math]\displaystyle{ \mathcal{L}_{REC} }[/math] is the reconstruction loss, defined as the mean squared error between input samples and their mean reconstructions [math]\displaystyle{ \mu_{\theta} }[/math] produced by the deterministic decoder. Formally, it is defined as: \begin{align} \mathcal{L}_{REC} = ||\mathbf{x} - \boldsymbol{\mu}_{\theta}(E_{\phi}(\mathbf{x}))||_2^2 \end{align}
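The composite loss above can be sketched in code. The linear encoder and decoder, the squared latent norm for [math]\displaystyle{ \mathcal{L}^{RAE}_Z }[/math], and decoder weight decay for [math]\displaystyle{ \mathcal{L}_{REG} }[/math] are illustrative assumptions; the paper considers several concrete choices of regularizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoder E_phi and decoder mu_theta as linear maps.
W_enc = rng.normal(size=(4, 8)) * 0.1   # E_phi: R^8 -> R^4
W_dec = rng.normal(size=(8, 4)) * 0.1   # mu_theta: R^4 -> R^8

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def rae_loss(x, beta=1e-2, lam=1e-3):
    z = encode(x)                       # deterministic latent code
    x_hat = decode(z)                   # mean reconstruction mu_theta(E_phi(x))
    l_rec = np.sum((x - x_hat) ** 2)    # L_REC: squared reconstruction error
    l_z = np.sum(z ** 2)                # L_Z^RAE: penalizes latent norm (one choice)
    l_reg = np.sum(W_dec ** 2)          # L_REG: decoder weight decay (one choice)
    return l_rec + beta * l_z + lam * l_reg

x = rng.normal(size=8)
loss = rae_loss(x)
```

Setting [math]\displaystyle{ \beta = \lambda = 0 }[/math] recovers a plain autoencoder objective, which makes the role of the two added regularization terms easy to isolate.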

Experiment Results

Conclusion

Critiques

References