Learning the Number of Neurons in Deep Networks
Introduction
Due to the availability of large-scale datasets and powerful computation, Deep Learning has made huge breakthroughs in many areas, such as language modeling and computer vision. In spite of this, building a very deep model is still challenging, especially for very large datasets. In deep neural networks, we need to determine the number of layers and the number of neurons in each layer, i.e., we need to determine the number of parameters, or the complexity of the model. Typically, this is done manually by trial and error. Recent research tends to build very deep networks, which means learning more parameters and leads to a significant cost in both memory and speed. Even though automatic model selection has been developed in past years through constructive and destructive approaches, both have drawbacks. A constructive method starts from a shallow architecture and then adds additional parameters during the learning process.
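To make the constructive idea concrete, the following is a minimal sketch, assuming a toy regression task on random data: the network starts with a very narrow hidden layer and widens it whenever the training loss stops improving. The widen helper, the growth threshold, and the random data are illustrative assumptions for this sketch, not the specific algorithm discussed in the paper.

# Constructive sketch: grow a hidden layer when training progress stalls.
# The widening rule and thresholds are illustrative assumptions only.
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randn(256, 1)   # toy regression data


def widen(fc1: nn.Linear, fc2: nn.Linear, extra: int):
    """Return new layers with `extra` more hidden units, keeping learned weights."""
    new1 = nn.Linear(fc1.in_features, fc1.out_features + extra)
    new2 = nn.Linear(fc2.in_features + extra, fc2.out_features)
    with torch.no_grad():
        # Copy the already-learned weights; new units keep their random init.
        new1.weight[: fc1.out_features] = fc1.weight
        new1.bias[: fc1.out_features] = fc1.bias
        new2.weight[:, : fc2.in_features] = fc2.weight
        new2.bias.copy_(fc2.bias)
    return new1, new2


fc1, fc2 = nn.Linear(10, 2), nn.Linear(2, 1)        # start narrow
prev_loss = float("inf")
for epoch in range(50):
    # Rebuild the optimizer each epoch because the parameter set may change.
    opt = torch.optim.SGD(list(fc1.parameters()) + list(fc2.parameters()), lr=0.1)
    loss = ((fc2(torch.relu(fc1(x))) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if prev_loss - loss.item() < 1e-4 and fc1.out_features < 32:
        fc1, fc2 = widen(fc1, fc2, extra=2)          # grow the hidden layer
    prev_loss = loss.item()

Note that growing the layer changes the parameter set, so the optimizer state has to be rebuilt after every widening step, which is one practical cost of such incremental schemes.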