Learning the Number of Neurons in Deep Networks

From statwiki

Introduction

Thanks to the availability of large-scale datasets and powerful computation, Deep Learning has made huge breakthroughs in many areas, such as language modeling and computer vision. In spite of this, designing a very deep model remains challenging, especially for very large datasets: one must determine the number of layers and the number of neurons in each layer, i.e., the number of parameters, or the complexity of the model. Typically, this is done manually by trial and error.

This paper introduces an approach that automatically chooses the number of neurons in each layer while the network is being learned. The approach adds a group sparsity regularizer on the parameters of the network, where each group acts on the parameters of one neuron, rather than first training an initial network as a pre-processing step (e.g., training shallow or thin networks to mimic the behaviour of deep ones [Hinton et al., 2014; Romero et al., 2015]). Parameters that prove useless are driven to zero, which cancels out the effect of the corresponding neuron. Therefore, the approach does not need to first learn a redundant network and then prune its parameters; instead, it learns the number of relevant neurons in each layer and the parameters of those neurons simultaneously.
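The group sparsity idea above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes each row of a layer's weight matrix holds the incoming weights of one neuron (one group), and the regularization strength `lam` is a hypothetical hyperparameter. The penalty is the sum of the l2 norms of the groups (a group Lasso term), so minimizing it encourages entire rows, and hence entire neurons, to go to zero.

```python
import numpy as np

def group_sparsity_penalty(W, lam=0.01):
    # Each row of W = the incoming weights of one neuron (one group).
    # Group Lasso penalty: lam * sum over neurons of ||w_neuron||_2.
    # Unlike a plain l1 penalty, this zeroes out whole rows at once,
    # effectively removing neurons from the layer.
    return lam * np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

# Toy layer with 3 neurons; the second has already been driven to zero
# and therefore contributes nothing to the penalty.
W = np.array([[0.5, -0.2,  0.1],
              [0.0,  0.0,  0.0],
              [0.3,  0.4, -0.1]])

penalty = group_sparsity_penalty(W, lam=1.0)
```

In practice this term would be added to the network's training loss, so that gradient-based optimization trades prediction accuracy against switching neurons off.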

Related Work

Model Training and Model Selection

Experiment

Set Up

Results

Analysis on Testing

Conclusion

References