Deep Generative Stochastic Networks Trainable by Backprop
Introduction
The deep learning boom of recent years was initially spurred by research in unsupervised learning techniques. However, most of the major successes since then have been based on supervised techniques.
Motivation
Unsupervised learning is attractive because the quantity of unlabelled data far exceeds that of labelled data.
Avoiding the intractable sums or maximizations inherent in many unsupervised techniques.
Generalizing autoencoders.
GSNs parametrize the transition operator of a Markov chain rather than P(X) directly. This allows unsupervised models to be trained by gradient descent on a maximum-likelihood-like objective: no partition functions, just backprop (a minimal sketch follows these notes).
Graphical models require expensive computations (inference, sampling, learning). MCMC can be used for estimation when only a few terms dominate the weighted sum being calculated.
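A minimal sketch of the "just backprop" training idea, using a one-layer denoising autoencoder (the simplest GSN). The architecture, corruption level, and hyperparameters here are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: a denoising autoencoder as the simplest GSN (assumed
# architecture and hyperparameters, not the paper's exact model).
import torch
import torch.nn as nn

NOISE_STD = 0.5  # corruption level -- an assumed hyperparameter

class TinyGSN(nn.Module):
    def __init__(self, n_vis=784, n_hid=256):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_vis, n_hid), nn.Tanh())
        self.decode = nn.Sequential(nn.Linear(n_hid, n_vis), nn.Sigmoid())

    def forward(self, x):
        # One step of the chain's transition operator:
        # corrupt the input, then reconstruct it.
        x_tilde = x + NOISE_STD * torch.randn_like(x)
        return self.decode(self.encode(x_tilde))

model = TinyGSN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x):
    # Ordinary backprop on a reconstruction loss; no partition
    # function or intractable sum appears anywhere.
    x_hat = model(x)
    loss = nn.functional.binary_cross_entropy(x_hat, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```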
Generative Stochastic Network (GSN)
A GSN relies on learning the transition operator of a Markov chain whose stationary distribution estimates the data-generating distribution.
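A hedged sketch of what this buys at sampling time: once the transition operator is learned, samples come from simply running the chain (corrupt, then reconstruct, repeatedly). This reuses the assumed TinyGSN model from the training sketch above; the binarization step is an assumption suited to binarized MNIST pixels:

```python
# Sketch: sampling by iterating the learned transition operator.
# Assumes the TinyGSN model from the training sketch above.
@torch.no_grad()
def sample_chain(model, x0, n_steps=100):
    samples = []
    x = x0
    for _ in range(n_steps):
        x = model(x)            # corrupt + reconstruct = one transition
        x = torch.bernoulli(x)  # assumed binarization, as for MNIST pixels
        samples.append(x)
    return samples

# e.g. start the chain from noise: sample_chain(model, torch.rand(1, 784))
```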
Results
MNIST
Faces
Comparison
Critique
Mentions Sum-Product Networks (SPNs).