Deep Generative Stochastic Networks Trainable by Backprop

From statwiki
Revision as of 17:27, 18 November 2015

Introduction

The deep learning boom of recent years was initially spurred by research in unsupervised learning techniques. However, most of the major successes since then have been based on supervised techniques.

Motivation

Unsupervised learning is attractive because the quantity of unlabelled data far exceeds that of labelled data.

Avoiding the intractable sums or maximizations inherent in many unsupervised techniques.

Generalizing autoencoders.

GSNs parametrize the transition operator of a Markov chain rather than P(X) directly. This allows unsupervised models to be trained by gradient descent on a likelihood-style objective with no partition function to estimate: just backprop.
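To make the "no partition function, just backprop" idea concrete, the sketch below fits a denoising reconstruction function by plain gradient descent on squared reconstruction error. It is a minimal illustration, not the paper's setup: a 1-D linear reconstruction r(x̃) = w·x̃ + b stands in for the deep network, and the data distribution, corruption level, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data (assumed N(2, 1)) and Gaussian corruption C(x_tilde | x).
x = rng.normal(loc=2.0, scale=1.0, size=5000)
sigma_corrupt = 0.5

# Linear reconstruction function r(x_tilde) = w * x_tilde + b,
# a stand-in for the deep network used in the paper.
w, b = 0.0, 0.0
lr = 0.1
for step in range(500):
    # Sample fresh corruption each step, as in denoising training.
    x_tilde = x + sigma_corrupt * rng.normal(size=x.shape)
    pred = w * x_tilde + b
    err = pred - x                   # gradient of 0.5*(pred - x)^2 w.r.t. pred
    grad_w = np.mean(err * x_tilde)  # backprop through the linear map
    grad_b = np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b
```

For this Gaussian case the learned parameters approach the optimal linear shrinkage (w ≈ 0.8, b ≈ 0.4 under these assumed parameters), so the trained function pulls corrupted samples back toward the data distribution; no normalizing constant appears anywhere in the objective.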

Graphical models make inference, sampling, and learning computationally expensive. MCMC can be used for estimation if only a few terms dominate the weighted sum being calculated.
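As a toy illustration of that point (my own example, not from the paper): the target p(k) ∝ exp(-k) below lives on a space of a million states, but the weighted sum E[k] is dominated by a handful of small-k terms, so a short Metropolis-Hastings chain estimates it without ever touching the full sum.

```python
import math
import random

random.seed(0)

# Target: p(k) proportional to exp(-k) over the huge space k = 0 .. 10**6.
# The sum E[k] = sum_k k*p(k) is dominated by a few small-k terms.
def log_p(k):
    return -k

K_MAX = 10**6
k = 10                      # arbitrary starting state
samples = []
for step in range(200_000):
    proposal = k + random.choice((-1, 1))
    if 0 <= proposal <= K_MAX:
        # Metropolis acceptance rule (symmetric proposal).
        if math.log(random.random()) < log_p(proposal) - log_p(k):
            k = proposal
    samples.append(k)

burn_in = 10_000
estimate = sum(samples[burn_in:]) / len(samples[burn_in:])
exact = 1.0 / (math.e - 1.0)   # closed form for this geometric-like target
```

The chain spends nearly all its time on the few dominant states near k = 0, which is exactly why the estimate converges despite the nominal state space having a million terms.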

Generative Stochastic Network (GSN)

A GSN relies on learning the transition operator of a Markov chain whose stationary distribution is an estimator of the data-generating distribution.
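Concretely, the chain alternates a corruption step C(x̃|x) with a reconstruction step P(x|x̃). The sketch below runs such a chain in a 1-D Gaussian case where the exact posterior reconstruction happens to be available in closed form; a trained GSN would substitute a learned network for `reconstruct`, and all distribution parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data distribution we pretend to have learned: N(mu, s2).
mu, s2 = 2.0, 1.0
sigma2 = 0.25              # variance of the corruption noise C(x_tilde | x)

def corrupt(x):
    # C(x_tilde | x): add Gaussian noise.
    return x + rng.normal(scale=np.sqrt(sigma2))

def reconstruct(x_tilde):
    # Sample from the exact posterior P(x | x_tilde) for this Gaussian case;
    # a trained GSN would use a learned network here instead.
    shrink = s2 / (s2 + sigma2)
    mean = mu + shrink * (x_tilde - mu)
    var = s2 * sigma2 / (s2 + sigma2)
    return rng.normal(loc=mean, scale=np.sqrt(var))

# Run the GSN-style chain: corrupt, then reconstruct, repeatedly.
x = 0.0
samples = []
for t in range(50_000):
    x = reconstruct(corrupt(x))
    samples.append(x)

samples = np.array(samples[1_000:])   # discard burn-in
```

In this toy case the chain's stationary distribution is exactly N(2, 1), so the collected samples reproduce the data distribution even though the chain only ever applies local corrupt-and-reconstruct moves.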

File:figure 1 bengio.png (Figure 1)
File:figure 2 bengio.png (Figure 2)


Results

MNIST

File:figure 3 bengio.png (Figure 3)


Faces

File:figure 4 bengio.png (Figure 4)


Comparison

Critique

The critique mentions Sum-Product Networks (SPNs), a related deep model whose partition function is tractable.