Improving neural networks by preventing co-adaptation of feature detectors


Presented by

Kyle Jung, Dae Hyun Kim, Seokho Lim, Stan Lee

Introduction to Dropout + Dataset

MNIST

TIMIT

Reuters

CNN

CIFAR-10

ImageNet

ImageNet is a dataset of millions of high-resolution labeled images in thousands of categories, which makes it very challenging to achieve good classification accuracy.

At the time, the best published result on this dataset was a 45.7% error rate, from "High-dimensional signature compression for large-scale image classification" (J. Sanchez, F. Perronnin, CVPR 2011). The authors achieved a comparable 48.6% error rate using a single neural network with five convolutional hidden layers interleaved with max-pooling layers, followed by two globally connected layers and a final 1000-way softmax layer: c1 - mp - c2 - mp - c3 - mp - c4 - mp - c5 - mp - G1 - G2 - softmax. Using 50% dropout in the 6th hidden layer lowered the error rate to 42.4%. They found that a large number of design decisions mattered in choosing the network architecture for the speech recognition (TIMIT) and object recognition (CIFAR-10 and ImageNet) datasets.
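The layer string above reads as five convolution + max-pooling stages followed by two globally connected layers and a 1000-way softmax. Below is a minimal PyTorch sketch of that layout with 50% dropout applied to the sixth hidden layer (the first globally connected layer); the channel widths, kernel sizes, and hidden-layer sizes are illustrative assumptions, not the paper's exact values.

<pre>
import torch
import torch.nn as nn

class DropoutConvNet(nn.Module):
    """Sketch of c1-mp-...-c5-mp-G1-G2-softmax with dropout on G1."""
    def __init__(self, num_classes=1000):
        super().__init__()
        def stage(c_in, c_out):
            # one convolutional hidden layer followed by max-pooling
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            stage(3, 64), stage(64, 128), stage(128, 256),
            stage(256, 256), stage(256, 512),
        )
        # after five poolings a 256x256 input becomes 8x8
        self.g1 = nn.Linear(512 * 8 * 8, 4096)   # 6th hidden layer (G1)
        self.drop = nn.Dropout(p=0.5)            # 50% dropout on G1's activations
        self.g2 = nn.Linear(4096, 4096)          # 7th hidden layer (G2)
        self.out = nn.Linear(4096, num_classes)  # 1000-way softmax (as logits)

    def forward(self, x):  # x: (N, 3, 256, 256)
        h = self.features(x).flatten(1)
        h = self.drop(torch.relu(self.g1(h)))
        h = torch.relu(self.g2(h))
        return self.out(h)  # softmax is applied inside the loss
</pre>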

A separate validation set was used to evaluate a large number of different architectures and make those decisions; the architecture that performed best with dropout on the validation set was then applied to the real test set.
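As a sketch of that selection protocol (with hypothetical names for the candidate models, the evaluation routine, and the dataset splits), the test set is consulted only once, after the winner has been picked on validation data:

<pre>
def select_architecture(candidates, evaluate, val_set, test_set):
    # Score every dropout-trained candidate on the validation set only.
    best_model, best_val_acc = None, float("-inf")
    for model in candidates:
        val_acc = evaluate(model, val_set)
        if val_acc > best_val_acc:
            best_model, best_val_acc = model, val_acc
    # The real test set is touched exactly once, by the chosen model.
    return best_model, evaluate(best_model, test_set)
</pre>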

[[File:imagenet1.png|200px|thumb|left|alt text]]





ImageNet is a dataset of millions of labeled images in thousands of categories, collected from the web and labeled by human labelers using Amazon's Mechanical Turk (MTurk) crowd-sourcing tool. ImageNet and CIFAR-10 are very similar, but ImageNet is about 20 times larger (1.3M vs. 60k images): roughly 1.3 million training images, 50,000 validation images, and 150,000 test images.

It is very difficult to achieve perfect accuracy on this dataset, even for humans, because ImageNet images often contain multiple instances of ImageNet objects and there are a large number of object classes.

They used images resized to 256 x 256 pixels for their experiments.
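A minimal sketch of that preprocessing step using Pillow (the file name is a hypothetical example):

<pre>
from PIL import Image

img = Image.open("example.jpg").convert("RGB")  # load a web image
img = img.resize((256, 256))  # resize to 256 x 256 as in the experiments
</pre>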


Conclusion