Deep Generative Stochastic Networks Trainable by Backprop
Introduction
The Deep Learning boom of recent years was spurred initially by research in unsupervised learning techniques. However, most of the major successes over the last few years have been based on supervised techniques.
Motivation
Unsupervised learning is attractive because the quantity of unlabelled data far exceeds that of labelled data.
GSNs avoid the intractable sums and maximizations inherent in many unsupervised techniques.
They generalize autoencoders.
GSNs parameterize the transition operator of a Markov chain rather than P(X) directly. This allows unsupervised models to be trained by gradient descent and maximum likelihood with no partition functions, just backprop, as sketched below.
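To make the last point concrete, here is a minimal sketch (not the paper's implementation) of a single-step GSN built from a denoising autoencoder in PyTorch. The class name GSNStep, the layer sizes, the Gaussian corruption level, and the optimizer settings are all illustrative assumptions. The corrupt-then-reconstruct cycle plays the role of the learned Markov chain transition operator: training maximizes the reconstruction log-likelihood by ordinary backprop, and samples are drawn by iterating the operator.

```python
import torch
import torch.nn as nn

class GSNStep(nn.Module):
    """Illustrative sketch: one transition of a GSN-style Markov chain,
    implemented as a denoising autoencoder. Names and sizes are assumptions."""
    def __init__(self, n_visible=784, n_hidden=256, noise_std=0.5):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_visible, n_hidden), nn.Tanh())
        self.decode = nn.Sequential(nn.Linear(n_hidden, n_visible), nn.Sigmoid())
        self.noise_std = noise_std

    def forward(self, x):
        # Corruption C(x_tilde | x): inject Gaussian noise.
        x_tilde = x + self.noise_std * torch.randn_like(x)
        # Reconstruction P(x | x_tilde): together with the corruption,
        # this defines one step of the chain's transition operator.
        return self.decode(self.encode(x_tilde))

model = GSNStep()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in batch; real data would go here

# One training step: reconstruction cross-entropy is the negative
# log-likelihood of the transition operator -- no partition function,
# just backprop through the corrupt/reconstruct cycle.
recon = model(x)
loss = nn.functional.binary_cross_entropy(recon, x)
opt.zero_grad()
loss.backward()
opt.step()

# Sampling: iterate the learned transition operator as a Markov chain.
with torch.no_grad():
    sample = torch.rand(1, 784)
    for _ in range(20):
        sample = torch.bernoulli(model(sample))  # draw from P(x | x_tilde)
```

The design point the sketch is meant to show: the network never represents P(X) explicitly; it only learns local corrupt/reconstruct transitions, and the data distribution emerges as the stationary distribution of the resulting chain.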