Wasserstein Auto-Encoders

From statwiki

Introduction

Recent years have seen a convergence of two previously distinct lines of research: representation learning from high-dimensional data and unsupervised generative modeling. In the field that formed at their intersection, Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) have emerged as the most popular approaches. VAEs are theoretically elegant, but have the drawback that they tend to generate blurry samples when applied to natural images. GANs, on the other hand, produce samples of better visual quality, but they come without an encoder, are harder to train, and suffer from the mode-collapse problem.
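As background for the comparison above, the standard VAE objective combines a per-pixel reconstruction term with a KL regularizer pulling the encoder's posterior toward a standard normal prior. The following is a minimal NumPy sketch of those two terms, not code from the paper; the function names and the Bernoulli choice of reconstruction likelihood are illustrative assumptions.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL(N(mu, diag(exp(logvar))) || N(0, I)):
    the regularization term in the standard VAE objective."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def bernoulli_nll(x, x_recon, eps=1e-8):
    """Per-pixel Bernoulli negative log-likelihood: the
    reconstruction term of the VAE objective."""
    return -np.sum(x * np.log(x_recon + eps)
                   + (1.0 - x) * np.log(1.0 - x_recon + eps))

# Illustrative: the (negative) ELBO is the sum of the two terms.
x = np.array([0.0, 1.0, 1.0])
x_recon = np.array([0.1, 0.9, 0.8])
mu, logvar = np.array([0.2, -0.1]), np.array([0.0, 0.0])
loss = bernoulli_nll(x, x_recon) + gaussian_kl(mu, logvar)
```

Averaging a per-pixel likelihood such as this over plausible reconstructions is one commonly cited explanation for the blurriness of VAE samples mentioned above.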

Motivation

Proposed Method

Conclusion