Dropout

Introduction

Dropout is a technique for preventing overfitting in deep neural networks, which contain a large number of parameters. The key idea is to randomly drop units from the neural network during training. Training with dropout amounts to sampling from an exponential number of different "thinned" networks. At test time, the effect of averaging the predictions of all these thinned networks is approximated by using a single unthinned network with appropriately scaled weights.
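As a rough illustration (a NumPy sketch, not part of the original page; the variable names, sizes, and retention probability are assumptions), the expected activation of a unit under dropout is just p times its activation, which is why scaling by p at test time approximates averaging over the thinned networks:

<pre>
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                                # probability of retaining a unit
y = rng.normal(size=100)               # toy activations of one hidden layer

# Average the masked activations over many sampled "thinned" networks.
masks = rng.binomial(1, p, size=(20000,) + y.shape)
mc_average = (masks * y).mean(axis=0)

# Test-time approximation: keep every unit but scale its activation by p.
approx = p * y

print(np.max(np.abs(mc_average - approx)))  # small (sampling noise only)
</pre>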

Demonstration

By dropping a unit out, we mean temporarily removing it from the network, along with all its incoming and outgoing connections, as shown in Figure 1. Each unit is retained with probability p, independent of the other units (p is usually set to 0.5, which seems to be close to optimal for a wide range of networks and tasks).
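As a small sketch (toy shapes and variable names are assumptions, not from the original page), zeroing a dropped unit's activation affects the next layer exactly as if the unit and its outgoing connections had been removed:

<pre>
import numpy as np

rng = np.random.default_rng(1)

y = rng.normal(size=4)        # activations of a hidden layer with 4 units
W = rng.normal(size=(3, 4))   # weights of the next layer (3 units)
b = rng.normal(size=3)

# Drop unit 2 by zeroing its activation ...
mask = np.array([1.0, 1.0, 0.0, 1.0])
z_masked = W @ (mask * y) + b

# ... which equals removing unit 2 together with its outgoing connections.
keep = [0, 1, 3]
z_removed = W[:, keep] @ y[keep] + b

print(np.allclose(z_masked, z_removed))  # True
</pre>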

Model

Consider a neural network with <math>L</math> hidden layers. Let <math>\mathbf{z}^{(l)}</math> denote the vector of inputs into layer <math>l</math> and <math>\mathbf{y}^{(l)}</math> the vector of outputs from layer <math>l</math> (with <math>\mathbf{y}^{(0)} = \mathbf{x}</math>, the input). <math>\mathbf{W}^{(l)}</math> and <math>\mathbf{b}^{(l)}</math> are the weights and biases at layer <math>l</math>. With dropout, the feed-forward operation becomes:

<math>r_j^{(l)} \sim \mathrm{Bernoulli}(p)</math>

<math>\tilde{\mathbf{y}}^{(l)} = \mathbf{r}^{(l)} \ast \mathbf{y}^{(l)}</math>

<math>z_i^{(l+1)} = \mathbf{w}_i^{(l+1)} \tilde{\mathbf{y}}^{(l)} + b_i^{(l+1)}</math>

<math>y_i^{(l+1)} = f\left(z_i^{(l+1)}\right)</math>

where <math>\ast</math> denotes element-wise multiplication, <math>\mathbf{r}^{(l)}</math> is a vector of independent Bernoulli random variables, and <math>f</math> is the activation function.
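A minimal NumPy sketch of this feed-forward step (the sigmoid activation, variable names, and toy sizes are assumptions for illustration, not prescribed by the page):

<pre>
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dropout_forward(y_prev, W, b, p=0.5, train=True):
    """One feed-forward step with dropout.

    y_prev : outputs y^(l) of the previous layer
    W, b   : weights W^(l+1) and biases b^(l+1) of the current layer
    p      : probability of retaining a unit
    """
    if train:
        # r^(l) ~ Bernoulli(p): sample which units of the previous layer to keep.
        r = rng.binomial(1, p, size=y_prev.shape)
        y_thinned = r * y_prev            # ytilde^(l) = r^(l) * y^(l)
    else:
        # Test time: keep all units, scaled by p to approximate the ensemble average.
        y_thinned = p * y_prev
    z = W @ y_thinned + b                 # z^(l+1) = W^(l+1) ytilde^(l) + b^(l+1)
    return sigmoid(z)                     # y^(l+1) = f(z^(l+1))

# Toy usage: a layer with 5 inputs and 3 units.
y_prev = rng.normal(size=5)
W = rng.normal(size=(3, 5))
b = np.zeros(3)
print(dropout_forward(y_prev, W, b, p=0.5, train=True))
</pre>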