Another look at distance-weighted discrimination



Presented by

Yuwei Liu, Daniel Mao

Introduction

Distance-weighted discrimination (DWD) is a margin-based classifier with advantages over the support vector machine (SVM), but its computation and theory are comparatively complicated. This paper proposes a novel, efficient algorithm for solving both the standard DWD and a generalized DWD, and the algorithm is much faster than the state-of-the-art approach based on second-order cone programming (SOCP). The paper also formulates a natural kernel DWD and establishes the Bayes risk consistency of the kernel DWD, settling a theoretical problem that had been open in the DWD literature. Studies on benchmark data sets show that the generalized DWD attains higher classification accuracy with less computation time than the SVM.
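
To make the formulation concrete, below is a minimal NumPy sketch of the generalized DWD loss and the corresponding penalized objective for a linear classifier. This is our own illustrative rendering of the formulation as we read it in the paper (with q = 1 intended to recover the standard DWD); the exact constants should be checked against Wang and Zou's definition, and all function names are ours.

import numpy as np

def gdwd_loss(u, q=1.0):
    """Generalized DWD loss V_q(u); q = 1 is meant to give the standard DWD loss.

    V_q(u) = 1 - u                              if u <= q / (q + 1)
           = (q^q / (q + 1)^(q + 1)) / u^q      otherwise
    """
    u = np.asarray(u, dtype=float)
    thresh = q / (q + 1.0)
    const = q ** q / (q + 1.0) ** (q + 1.0)
    # np.maximum keeps the unused branch numerically safe when u <= 0.
    return np.where(u <= thresh, 1.0 - u, const / np.maximum(u, thresh) ** q)

def gdwd_objective(w, b, X, y, lam, q=1.0):
    """Penalized empirical risk for a linear DWD classifier f(x) = b + x @ w."""
    margins = y * (X @ w + b)            # labels y must be coded as -1 / +1
    return gdwd_loss(margins, q).mean() + lam * np.dot(w, w)

Minimizing gdwd_objective over (w, b), for example with a generic gradient-based optimizer, yields a linear generalized DWD fit; the paper's contribution is a much faster specialized solver for this problem.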

Previous Work

DWD was proposed by Marron et al. (2007); it retains the elegant geometric interpretation of the SVM, resolves the 'data piling' issue, and shows competitive performance. Existing solvers handle DWD by reformulating it as a second-order cone programming (SOCP) problem (Alizadeh and Goldfarb, 2004; Boyd and Vandenberghe, 2004).

Beyond this computational issue, the kernel extension of DWD and the corresponding kernel learning theory remain undeveloped. In contrast, the kernel SVM and kernel logistic regression (Wahba et al., 1994; Zhu and Hastie, 2005) have mature theoretical foundations built on the theory of reproducing kernel Hilbert spaces (RKHSs) (Wahba, 1999; Hastie et al., 2009). Establishing the Bayes risk consistency of kernel DWD was posed as a fundamental open problem in the original DWD paper (Marron et al., 2007).
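
As a companion sketch, the kernel DWD objective can be written down through the representer theorem, with the RKHS penalty reducing to a quadratic form in the kernel matrix. The code below is our own illustration under that assumption (it reuses the same generalized DWD loss as in the sketch above); the Gaussian kernel and all names are our choices, not the paper's implementation.

import numpy as np

def gdwd_loss(u, q=1.0):
    # Generalized DWD loss, same form as in the earlier sketch.
    u = np.asarray(u, dtype=float)
    thresh = q / (q + 1.0)
    const = q ** q / (q + 1.0) ** (q + 1.0)
    return np.where(u <= thresh, 1.0 - u, const / np.maximum(u, thresh) ** q)

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X1 and X2.
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dist)

def kernel_dwd_objective(alpha, b, K, y, lam, q=1.0):
    """Objective for kernel DWD with decision function f(x_i) = b + K[i] @ alpha.

    By the representer theorem the RKHS norm penalty becomes alpha' K alpha.
    """
    margins = y * (K @ alpha + b)        # labels y coded as -1 / +1
    return gdwd_loss(margins, q).mean() + lam * alpha @ K @ alpha

# Usage sketch: K = rbf_kernel(X, X); minimize kernel_dwd_objective over (alpha, b).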

Motivation

The performance of deep neural networks can be improved by increasing the depth and the width of the networks. However, this approach suffers from two major bottlenecks. First, the enlarged network tends to overfit the training data, especially when only a limited number of labeled examples is available. Second, learning a much larger number of parameters requires a dramatic increase in computational resources.

The fundamental way to handle both problems would be to use sparsely connected rather than fully connected networks while, at the same time, making numerical calculations on non-uniform sparse data structures efficient. Motivated by Arora et al. [3] and Catalyurek et al. [4], the Inception architecture overcomes these difficulties by clustering sparse matrices into relatively dense submatrices, thereby exploiting the extra sparsity while still using existing computational hardware efficiently.

Model Architecture

The Inception architecture consists of stacked blocks called Inception modules. The idea is to increase the depth and width of the model by finding a locally optimal sparse structure and repeating it spatially. Traditionally, in each layer of a convolutional network one has to decide between a pooling operation and a convolution, and on the convolution size (1 by 1, 3 by 3, or 5 by 5), even though all of these choices are beneficial for the modeling power of the network. In the Inception module, instead of choosing, all of these options are computed in parallel (Fig. 1a). Inspired by the layer-by-layer construction of Arora et al. [3], the Inception module analyzes the correlation statistics of the previous layer and clusters them into groups of units with high correlation. These clusters form the units of the next layer and are connected to the units of the previous layer. Each unit of the earlier layer corresponds to some region of the input image, and their outputs are concatenated into a filter bank. Because pooling has proved beneficial in convolutional networks, a parallel pooling path is also added to each module.

The Inception module in its naïve form (Fig. 1a) suffers from a high computation and power cost. In addition, since the concatenated output of the various convolutions and the pooling layer forms an extremely deep output volume, the claim that this architecture improves memory and compute usage looks counterintuitive. This issue is addressed by adding a 1 by 1 convolution before the costly 3 by 3 and 5 by 5 convolutions, as sketched in the code after the figure captions below. The idea of the 1 by 1 convolution was first introduced by Lin et al. and called network in network [1]. A 1 by 1 convolution is mathematically equivalent to a multilayer perceptron applied at every spatial position: it reduces the dimension of the filter space (the depth of the output volume), and a rectified linear activation (ReLU) placed immediately after each 1 by 1 convolution adds non-linearity (Fig. 1b). The small kernel size (1 by 1) also helps reduce over-fitting. This dimensionality-reduction feature of the 1 by 1 convolution shields the next stage from the large number of input filters of the previous stage (Footnote 2).

Figure 1(a): Inception module, naïve version
Figure 1(b): Inception module with dimension reductions
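
To make the module structure concrete, here is a minimal PyTorch sketch of an Inception block with dimension-reducing 1 by 1 convolutions in front of the 3 by 3 and 5 by 5 branches and a projected pooling path. PyTorch and the specific channel counts are our choices for illustration, not the paper's implementation.

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Inception module with 1x1 reduction convolutions (cf. Fig. 1b); channel counts are illustrative."""

    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),           # 1x1 reduction
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))   # 3x3 convolution
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),           # 1x1 reduction
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))   # 5x5 convolution
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),                         # parallel pooling path
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))        # 1x1 pool projection

    def forward(self, x):
        # Concatenate the four parallel branches along the channel dimension (the filter bank).
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))    # shape (1, 256, 28, 28)

Without the 1 by 1 reductions, the 3 by 3 and 5 by 5 convolutions would operate on all 192 input channels, which illustrates why the naïve module quickly becomes expensive as modules are stacked.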

The combination of convolutions at various scales has some similarity with the human visual system, which also processes visual information at multiple scales simultaneously and then combines the features extracted at each scale. Similarly, in the Inception design, the network-in-network (1 by 1) paths extract fine-grained details of the input volume, the medium and large filters cover larger receptive fields and extract coarser features, and the pooling operations reduce the spatial size and thereby help to limit overfitting.

ILSVRC 2014 Challenge Results

The proposed architecture was implemented in a 22-layer deep network called GoogLeNet and submitted to the ILSVRC 2014 classification and detection challenges.

The classification challenge is to classify images into one of 1000 categories in the ImageNet hierarchy. Accuracy is measured by the top-5 error rate: the percentage of test examples for which the correct class is not among the top 5 predicted classes. The results of the classification challenge are shown in Table 1. The final GoogLeNet submission obtains a top-5 error of 6.67% on both the validation and test data, ranking first among all participants and significantly outperforming the top teams of previous years, without using any external data.

Table 1: Classification performance
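
As a concrete illustration of the evaluation metric, here is a small NumPy sketch of how the top-5 error can be computed from a matrix of class scores; the random data at the end is purely illustrative.

import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is not among the 5 highest-scoring classes.

    scores: (n_examples, n_classes) array of class scores; labels: (n_examples,) true class indices.
    """
    top5 = np.argsort(scores, axis=1)[:, -5:]          # indices of the 5 best-scoring classes
    hit = (top5 == labels[:, None]).any(axis=1)        # is the true label among them?
    return 1.0 - hit.mean()

# Illustrative example with random scores over 1000 classes.
rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 1000))
labels = rng.integers(0, 1000, size=8)
print(top5_error(scores, labels))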

The ILSVRC detection challenge asks for bounding boxes around objects of 200 classes in images. A detected object counts as correct if its class matches the ground truth and its bounding box overlaps the ground-truth box by at least 50%. Each image may contain multiple objects (at different scales) or none at all. Performance is reported as mean average precision (mAP). The results of the detection challenge are listed in Table 2. Using the Inception model as a region classifier, combining it with Selective Search, and using an ensemble of 6 CNNs, GoogLeNet gave the top detection results, almost doubling the accuracy of the best 2013 model.

Table 2: Detection performance
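
To illustrate the matching criterion, here is a small Python sketch of the intersection-over-union (IoU) overlap between two boxes, interpreting "overlap by at least 50%" as the usual IoU >= 0.5 rule; the corner-coordinate box format (x1, y1, x2, y2) is an assumption for illustration.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct if the class matches and IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.5)    # IoU = 50/150 ≈ 0.33 -> False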

Conclusion

GoogLeNet outperformed previous deep learning networks and served as a proof of concept that approximating the expected optimal sparse structure with readily available dense building blocks (the Inception modules) is a viable method for improving neural networks in computer vision. The main advantage of this method is a significant gain in quality at only a modest increase in computational requirements, which held even without performing any bounding-box operations to detect objects.

Critiques

Compared with architectures such as VGGNet and AlexNet, GoogLeNet uses only about 5 million parameters, a reduction of almost 92%. This made Inception usable for many big-data applications in which a huge amount of data must be processed at a reasonable cost while computational capacity is limited. However, the Inception network is still complex and sensitive to scaling: if the network is scaled up, large parts of the computational gains can be lost immediately. Also, the paper gives no clear description of the factors that led to the design decisions of the Inception architecture, which makes it harder to adapt the design to other applications while maintaining the same computational efficiency.


References

[1] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. CoRR, abs/1312.4400, 2013.

[2] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

[3] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. CoRR, abs/1310.6343, 2013.

[4] Ümit V. Çatalyürek, Cevdet Aykanat, and Bora Uçar. On two-dimensional sparse matrix partitioning: Models, methods, and a recipe. SIAM J. Sci. Comput., 32(2):656–683, February 2010.

Footnote 1: Hebbian theory is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process.

Footnote 2: For more explanation on 1 by 1 convolution refer to: https://iamaaditya.github.io/2016/03/one-by-one-convolution/