goingDeeperWithConvolutions
= Introduction =

In the last three years, thanks to advances in deep learning and, more concretely, convolutional networks [http://white.stanford.edu/teach/index.php/An_Introduction_to_Convolutional_Neural_Networks an introduction to CNNs], the quality of image recognition has increased dramatically, and the error rates in the ILSVRC competition [http://image-net.org/challenges/LSVRC/ ILSVRC] have dropped significantly year after year. This paper proposes a new deep convolutional neural network architecture codenamed Inception. Using the Inception module and a carefully crafted design, the researchers built a 22-layer deep network called GoogLeNet, which uses 12× fewer parameters than the winning architecture of ILSVRC 2012 while being significantly more accurate. [http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf [1]]
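To make the idea of the Inception module concrete, the following is a minimal sketch of a single module in which parallel 1×1, 3×3 and 5×5 convolutions and a 3×3 max-pooling branch (each preceded or followed by a 1×1 "reduction" convolution) are concatenated along the channel dimension. PyTorch is used here purely as an assumed framework for illustration, and the channel counts are placeholder values rather than the exact GoogLeNet configuration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Sketch of an Inception module with dimension reduction.
    Channel counts are illustrative, not the official GoogLeNet settings."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        # 1x1 convolution branch
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        # 1x1 reduction followed by 3x3 convolution
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # 1x1 reduction followed by 5x5 convolution
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        # 3x3 max-pooling followed by 1x1 projection
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # All branches preserve spatial size; concatenate along channels.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: one module applied to a 192-channel, 28x28 feature map.
x = torch.randn(1, 192, 28, 28)
block = InceptionModule(192, c1=64, c3_red=96, c3=128, c5_red=16, c5=32, pool_proj=32)
print(block(x).shape)  # torch.Size([1, 256, 28, 28])
</syntaxhighlight>

Stacking many such modules, together with the 1×1 reductions that keep the channel count bounded, is what allows the 22-layer network to remain far smaller in parameters than earlier architectures.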