Deep Sparse Rectifier Neural Networks
Introduction
Two trends can be seen in deep learning architecture improvements. The first is increasing sparsity (for example, convolutional neural networks); the second is increasing biological plausibility (the logistic sigmoid, for instance, is considered more biologically plausible than tanh, even though tanh networks often train better). Rectified linear neurons promote both sparsity and biological plausibility, and thus should improve performance.
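As a minimal numpy sketch (not from the paper) of why rectifier units yield sparse representations: with roughly zero-centered pre-activations, about half the units output exactly zero, so the hidden representation is sparse by construction. The layer sizes and random weights below are illustrative choices.

import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# One dense layer with random Gaussian weights on a random input.
x = rng.standard_normal(100)          # input vector (illustrative size)
W = rng.standard_normal((50, 100))    # hypothetical weight matrix
h = relu(W @ x)                       # hidden activations

# Roughly half the pre-activations are negative, so roughly half the
# ReLU outputs are exactly zero -- the representation is sparse.
sparsity = np.mean(h == 0.0)
print(f"fraction of exactly-zero activations: {sparsity:.2f}")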
Method
Results
Criticism
Rectified linear neurons are not truly biologically plausible for a variety of reasons: real neurons communicate through discrete spikes rather than continuous analog outputs, and a rectifier's unbounded activation has no biological counterpart. However, networks of rectified linear units can be transformed into spiking neural networks.
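One common route for such a transformation is rate coding: each non-negative ReLU activation is read as a firing rate, and spikes are sampled at that rate. The sketch below is an illustrative toy under that assumption, not any specific published conversion method; the function name, n_steps, and max_rate are hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def rate_coded_spikes(activations, n_steps=1000, max_rate=0.5):
    """Read non-negative ReLU activations as per-step spike
    probabilities and sample Bernoulli spike trains (rate coding).
    n_steps and max_rate are illustrative, not from the paper."""
    scale = activations.max()
    if scale == 0.0:  # no active units, no spikes
        return np.zeros((n_steps, activations.size), dtype=bool)
    p = np.clip(activations / scale * max_rate, 0.0, 1.0)
    return rng.random((n_steps, activations.size)) < p

a = relu(rng.standard_normal(10))   # hypothetical ReLU activations
spikes = rate_coded_spikes(a)

# Empirical firing rates approximately recover the scaled activations.
print(np.round(spikes.mean(axis=0), 2))
print(np.round(np.clip(a / a.max() * 0.5, 0.0, 1.0), 2))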