Fix your classifier: the marginal value of training the last weight layer


Introduction

Deep neural networks have become a widely used model for machine learning, achieving state-of-the-art results on many tasks. The most common task these models are used for is to perform classification, as in the case of convolutional neural networks (CNNs) used to classify images to a semantic category. Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification. This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more computational resources.
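For instance, with a 2048-dimensional feature vector (as produced by ResNet-50) and 1000 output classes (as in ImageNet), the final affine layer alone holds 2048 × 1000 weights plus 1000 biases, roughly 2 million parameters, and this count grows in direct proportion to the number of classes.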

Brief Overview

In order to alleviate the aforementioned problem, the authors propose that the final layer of the classifier be fixed (up to a global scale constant). They argue that, for most classification tasks, this provides significant memory and computational benefits with little or no loss of accuracy. In addition, they show that by initializing the classifier with a Hadamard matrix, inference can be made faster as well.
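The following is a minimal sketch of what such a fixed classifier might look like in PyTorch. It is illustrative only, not the authors' implementation: the class name FixedClassifier, the choice of a single learnable scalar scale, and the use of scipy.linalg.hadamard to build the fixed projection matrix are assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.linalg import hadamard


class FixedClassifier(nn.Module):
    def __init__(self, feature_dim, num_classes, use_hadamard=True):
        super().__init__()
        if use_hadamard:
            # scipy's hadamard() requires a power-of-two order, so this sketch
            # assumes max(feature_dim, num_classes) is a power of two; the first
            # num_classes rows and feature_dim columns form the fixed matrix.
            n = max(feature_dim, num_classes)
            H = torch.tensor(hadamard(n), dtype=torch.float32)
            W = H[:num_classes, :feature_dim]
        else:
            W = torch.randn(num_classes, feature_dim)
        W = F.normalize(W, dim=1)             # unit-norm rows
        self.register_buffer("weight", W)     # buffer: never updated by the optimizer
        self.scale = nn.Parameter(torch.tensor(1.0))  # single learned global scale constant

    def forward(self, features):
        # Logits are a scaled, fixed projection of the feature vector.
        return self.scale * F.linear(features, self.weight)

Because the projection matrix is stored as a buffer rather than a parameter, it requires no gradients and no optimizer state, which is where the memory and computational savings in the final layer come from under this sketch's assumptions.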

Previous Work

Training neural network (NN) models and using them for inference requires large amounts of memory and computational resources; thus, a considerable amount of recent research has focused on reducing the size of networks. Notable approaches include:

  • Weight sharing and sparsification (Han et al., 2015)
  • Mixed-precision training to roughly halve the memory footprint of neural networks (Micikevicius et al., 2017)
  • Low-rank approximations to speed up CNNs (Tai et al., 2015)
  • Quantization of weights, activations and gradients to further reduce computation during training (Hubara et al., 2016b; Li et al., 2016; Zhou et al., 2016)