Wasserstein Auto-Encoders



= Introduction =
Recent years have seen a convergence of two previously distinct approaches: representation learning from high-dimensional data and unsupervised generative modeling. In the field that has formed at their intersection, Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) have emerged as the most popular models. VAEs are theoretically elegant but have the drawback that they tend to generate blurry samples when applied to natural images. GANs, on the other hand, produce sampled images of better visual quality, but they come without an encoder, are harder to train, and suffer from the mode-collapse problem.


= Motivation =


= Proposed Method =

= Conclusion =