Deep Sparse Rectifier Neural Networks

== Introduction ==

Two trends can be seen in deep learning architecture improvements: increasing sparsity (for example, in convolutional neural networks) and increasing biological plausibility (for example, sigmoid neurons are more biologically plausible than tanh neurons). Rectified linear neurons promote both sparsity and biological plausibility, and should therefore improve performance.


== Biological Plausibility and Sparsity ==
 
In the brain, neurons rarely fire at the same time; this balances quality of representation against energy consumption. This is in stark contrast to sigmoid neurons, which fire at half of their maximum rate when their input is zero. A solution to this problem is the rectifier neuron, which does not fire at all when its input is at or below zero.
 
 
<gallery>
Image:sig_neuron.png|Sigmoid and tanh neurons
Image:lif_neuron.png|Leaky integrate-and-fire neuron
Image:rect_neuron.png|Rectified linear neuron
</gallery>
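
The difference between the two activations can be made concrete with a small numerical sketch (plain NumPy, not code from the paper): the sigmoid outputs half of its maximum at an input of zero, while the rectifier max(0, x) outputs an exact zero there, which is what lets it produce genuinely sparse activations.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rectifier(x):
    # Rectified linear unit: exactly zero for all non-positive inputs.
    return np.maximum(0.0, x)

# At an input of zero the sigmoid neuron still outputs half of its
# maximum value, whereas the rectifier is completely silent.
print(sigmoid(0.0))    # 0.5
print(rectifier(0.0))  # 0.0

# For zero-centred random pre-activations roughly half of the rectifier
# outputs are exactly zero, giving a sparse representation.
z = np.random.randn(8)
print(rectifier(z))
</syntaxhighlight>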


== Sparsity ==



== Experiments ==

Networks with rectifier neurons were applied to image recognition and sentiment analysis. The image recognition datasets included black-and-white (MNIST, NISTP), colour (CIFAR10), and stereo (NORB) images.

The sentiment analysis datasets were taken from opentable.com and Amazon; in both, the task was to predict the star rating from the text of the review.
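
To make "networks with rectifier neurons" concrete, the following is a minimal NumPy sketch of the forward pass of a deep rectifier classifier. The layer sizes, initialisation, and dummy inputs are illustrative placeholders, not the architectures or training procedures used in the experiments.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def rectifier(x):
    # Rectified linear activation: max(0, x)
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 784 inputs (e.g. flattened MNIST pixels), two
# rectifier hidden layers of 256 units each, and 10 output classes.
sizes = [784, 256, 256, 10]
weights = [rng.normal(0.0, 0.01, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Hidden layers use the rectifier; the output layer gives class probabilities.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = rectifier(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

x = rng.random((5, 784))   # a dummy batch of 5 "images"
print(forward(x).shape)    # (5, 10)
</syntaxhighlight>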

== Results ==

Results from image classification: [[File:rectifier res 1.png]]

Results from sentiment classification: [[File:rectifier res 2.png]]

In the NORB and sentiment analysis cases, the network benefited greatly from pre-training. However, the benefit on NORB diminished as the training set size grew.

On the Amazon dataset, the rectifier network reached 78.95% accuracy, while the previous state of the art was 73.72%.

== Criticism ==

Rectifier neurons are not really biologically plausible, for a variety of reasons. In particular, neurons in the cortex do not have tuning curves resembling the rectifier. Additionally, the sparsity at which the rectifier networks performed best was around 50 to 80%, while the brain's sparsity is estimated at around 95 to 99%.
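
Here sparsity refers to the fraction of hidden units whose output is exactly zero for a given input. A minimal sketch of how that fraction could be measured for one rectifier layer (an illustration with random pre-activations, not the paper's measurement code):

<syntaxhighlight lang="python">
import numpy as np

def rectifier(x):
    return np.maximum(0.0, x)

def activation_sparsity(h):
    # Fraction of unit activations that are exactly zero.
    return float(np.mean(h == 0.0))

# Hypothetical hidden layer: pre-activations for a batch of 100 inputs and
# 256 units; with zero-centred pre-activations about half the units are off.
rng = np.random.default_rng(0)
pre_activations = rng.normal(size=(100, 256))
print(activation_sparsity(rectifier(pre_activations)))  # roughly 0.5
</syntaxhighlight>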