Searching For Efficient Multi-Scale Architectures For Dense Image Prediction

Revision as of 13:59, 12 November 2018



The design of neural network architectures is an important component of successful machine learning and data science projects. In recent years, the field of Neural Architecture Search (NAS) has emerged: the study of automatically finding an optimal neural architecture for a given task within a well-defined architecture space. The resulting architectures have often outperformed networks designed by human experts on tasks such as image classification and natural language processing [2, 3, 4]. This paper presents a method for finding a neural architecture that performs well on dense image prediction.


Deep neural networks owe much of their success to the fact that they greatly reduce the work of feature engineering, since a DNN can automatically extract useful features from raw input. However, this created a new type of engineering work: network engineering. To successfully extract features, one needs a suitable network architecture, so the engineering effort has shifted from designing features to designing networks that can better abstract useful features.

The motivation for NAS is that there is no guiding theory on how to design the optimal network architecture. Given abundant computational resources, one intuitive solution is to define a finite search space and let computers do the dirty work of searching for structures and hyperparameters.

NAS Overview

NAS essentially turns a design problem into a search problem. As with any search problem, we need a clear definition of three things:

  1. Search space
  2. Search strategy
  3. Performance Estimation Strategy


The search space is intuitive to understand: in what hyperparameter space should we look for our optimal solution? In NAS, the search space depends heavily on the assumptions we make about the neural architecture. The search strategy details how to explore the search space. The performance estimation strategy specifies how to evaluate a model once we have found a set of hyperparameters; in NAS, the goal is typically to find architectures that achieve high predictive performance on unseen data. [5]

We will take a deep dive into these three dimensions of NAS in the following sections.

Search Space

There are typically three ways of defining the search space.

Chain-structured neural networks

[Figure: a chain-structured network]

[5] A chain-structured network can be viewed as a sequence of n layers, where layer [math] i[/math] receives its input from layer [math] i-1[/math] and its output serves as the input to layer [math] i+1[/math].

The search space is then parametrized by:
1) The number of layers n
2) The type of operation executed by each layer
3) The hyperparameters associated with each layer
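The parametrization above can be sketched as a random draw from the chain-structured space. This is a minimal illustration, not the paper's code; the operation set and hyperparameter choices below are made-up assumptions.

```python
import random

# Illustrative operation set (an assumption, not from the paper).
OPS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]

def sample_chain_architecture(max_layers=10, seed=None):
    """Draw one chain-structured architecture at random."""
    rng = random.Random(seed)
    n = rng.randint(1, max_layers)                 # 1) number of layers
    layers = []
    for _ in range(n):
        op = rng.choice(OPS)                       # 2) operation per layer
        hparams = {"channels": rng.choice([16, 32, 64])}  # 3) hyperparameters
        layers.append((op, hparams))
    return layers

arch = sample_chain_architecture(seed=0)
```

Each sampled architecture is just an ordered list of (operation, hyperparameters) pairs, which is exactly the chain-structured assumption: layer i feeds only layer i+1.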

Multi-branch networks

[Figure: a multi-branch network]

[5] This architecture allows significantly more degrees of freedom: it permits shortcuts and parallel branches. Some of these ideas are inspired by hand-crafted networks; for example, shortcuts from shallow layers directly to deep layers come from networks like ResNet [6].

The search space includes that of chain-structured networks, with the added freedom of shortcut connections and parallel branches.


Cell-based networks

[Figure: a cell-based network]

[6] This architecture defines a cell that is used as the building block of the neural network. A good analogy is to think of a cell as a Lego piece: you can define different types of cells as different Lego pieces and then combine them to form a new neural structure.

The search space includes the internal structure of the cell and how to combine these blocks to form the resulting architecture.

What they used in this paper

[pic] [1] This paper's approach is closest to the third option above, the cell-based search space.

The paper defines two components: a network backbone and a cell unit called the DPC. The network backbone takes the input image as a tensor and returns a feature map f that is supposedly a good abstraction of the image. The DPC, short for Dense Prediction Cell, is what the paper introduces. The search space consists of the choice of network backbone and the internal structure of the DPC.

For the network backbone, they simply choose from existing mature architectures, such as MobileNet-v2 and Inception. For the structure of the DPC, they define a smaller unit called a branch. A branch is a triple (Xi, OP, Yi), where Xi is an input tensor, OP is an operation that can be applied to the tensor, and Yi is the resulting tensor after the operation.

In the paper, each DPC consists of B = 5 branches to balance expressivity against computational tractability.
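As a rough illustration (not the authors' code), a DPC candidate can be encoded as a list of branch triples, where branch i may read the backbone feature map or any earlier branch's output. The operator names below are simplified stand-ins for the real operator space.

```python
import random

# Simplified operator names standing in for the paper's operator space.
OPS = ["conv1x1", "atrous_sep_conv3x3", "spatial_pyramid_pool"]

def sample_dpc(num_branches=5, seed=None):
    """Sample one DPC: each branch is a triple (X_i, OP, Y_i).
    Tensor 0 is the backbone feature map; tensor j > 0 is branch j's output."""
    rng = random.Random(seed)
    branches = []
    for i in range(1, num_branches + 1):
        x = rng.randrange(i)          # branch i may read any of i tensors
        op = rng.choice(OPS)
        branches.append({"input": x, "op": op, "output": i})
    return branches
```

Because branch i has i possible inputs, the number of input choices grows with the branch index, which is where the B! factor in the search-space count below comes from.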

The operator space, OP, is defined as the following set of functions:

  1. Convolution with a 1 × 1 kernel.
  2. 3×3 atrous separable convolution with rate rh×rw, where rh and rw ∈ {1, 3, 6, 9, . . . , 21}.
  3. Average spatial pyramid pooling with grid size gh × gw, where gh and gw ∈ {1, 2, 4, 8}.

The operator space contains 1 + 8×8 + 4×4 = 81 functions. Branch i can read from any of i possible input tensors, giving i × 81 options for that branch. Therefore, for B = 5, the search space size is B! × 81^B ≈ 4.2 × 10^11 configurations.
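The count above can be checked directly:

```python
from math import factorial

# 1 (1x1 conv) + 8*8 atrous-rate pairs + 4*4 pooling grids = 81 operators.
num_ops = 1 + 8 * 8 + 4 * 4   # 81
B = 5                          # branches per DPC

# Branch i chooses among i inputs and 81 operators, so the total is
# (1*81) * (2*81) * ... * (B*81) = B! * 81**B.
search_space_size = factorial(B) * num_ops ** B
print(search_space_size)       # 418414128120, i.e. about 4.2e11
```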

Search Strategy

There are several common search strategies in NAS, such as reinforcement learning, random search, and evolutionary algorithms. The one used in this paper is random search: it samples points from the search space uniformly at random, as well as some points close to the current observed best point. The authors cite a claim that random search is competitive with reinforcement learning and other learning techniques [8]. Implementation-wise, they used Google Vizier, a service for black-box optimization [D. Golovin, B. Solnik, S. Moitra, G. Kochanski, J. Karro, and D. Sculley. Google Vizier: A Service for Black-Box Optimization. In SIGKDD, 2017]. It is not open source, but an open-source implementation is available at https://github.com/tobegit3hub/advisor.
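A minimal sketch of this strategy, assuming a generic `evaluate` function standing in for the proxy-task score and hypothetical `sample_uniform`/`perturb` helpers (none of these are the paper's or Vizier's API):

```python
import random

def random_search(sample_uniform, perturb, evaluate,
                  budget=100, p_local=0.5, seed=0):
    """Random search: mostly sample the space uniformly, but with
    probability p_local perturb the best configuration seen so far."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        if best_cfg is not None and rng.random() < p_local:
            cfg = perturb(best_cfg, rng)      # explore near the incumbent
        else:
            cfg = sample_uniform(rng)         # uniform exploration
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

For example, searching the integers 0–100 for the maximizer of -(x - 42)^2 with this routine quickly concentrates samples near 42; in NAS the configurations would be DPC encodings and `evaluate` the proxy-task score.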

Performance Evaluation Strategy

Evaluation in this task is tricky because we are evaluating neural networks, which must be trained before they can be evaluated. Since we are doing pixel-level classification on high-resolution images, the naive approach would require a tremendous amount of computational resources.

The paper solves this by defining a proxy task: a task that requires significantly fewer computational resources while still giving a good estimate of the network's performance. In most image classification applications of NAS, the proxy task is to train the network on lower-resolution images, on the assumption that a network that performs well at lower resolution should perform reasonably well at higher resolution.

However, this approach does not work here, because dense prediction tasks inherently require high-resolution training data. The approach used in the paper is the following:

  1. Use a smaller backbone for the proxy task
  2. Cache the feature maps produced by the network backbone on the training set and build a single DPC directly on top of them
  3. Stop early: train for only 30k iterations with a batch size of 8
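The steps above can be sketched as follows. This is a hedged illustration of the caching idea only; `backbone` and `train_dpc` are hypothetical stand-ins, not the authors' components.

```python
def evaluate_candidate(dpc_config, backbone, train_images, train_dpc, cache):
    """Proxy-task evaluation: the expensive backbone runs once per image
    (cached), and only the lightweight DPC head is trained per candidate."""
    feats = []
    for key, img in train_images:
        if key not in cache:
            cache[key] = backbone(img)        # expensive, computed only once
        feats.append(cache[key])
    # Early stopping: train the head for a small, fixed budget.
    return train_dpc(dpc_config, feats, max_steps=30_000, batch_size=8)
```

The key saving is that evaluating a second candidate reuses the cached feature maps, so the backbone's cost is amortized over the whole search.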

Training on the large-scale backbone without fixing its weights would take one week on a P100 GPU; the proxy task cuts this down to 90 minutes. They then rank the candidate architectures, choose the top 50, and run a full evaluation on them.

The evaluation metric is mIOU, the pixel-level mean intersection over union: the area of the intersection of the ground truth and the prediction divided by the area of their union.
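A minimal mIOU on flat integer label maps, following this definition (assuming, as is standard, that per-class IOUs are averaged over the classes present in either map):

```python
def mean_iou(pred, truth, num_classes):
    """Mean intersection over union across classes, on flat label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```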


This method achieves state-of-the-art performance on many datasets. The following table quantifies the performance gains.

[pic] [1] They chose a modified Xception network as the backbone; the following is the resulting architecture for the DPC.


Future work and real-world applications

The authors suggest that increasing the number of branches in the DPC might yield a further gain on the image segmentation task. However, random search in an exponentially growing space may become more challenging, which may require a more intelligent search strategy.

Some real-world applications already deploy NAS techniques in production, and the search technique described in this paper may also be deployed in production if its cost can be driven down. Two good examples are Google AutoML and Microsoft Custom Vision AI [9, 10]: https://cloud.google.com/automl/ and https://azure.microsoft.com/en-us/services/cognitive-services/custom-vision-service/


Critiques

1. Rich man's game

The technique described in the paper can only be applied by parties with abundant computational resources, such as Google, Facebook, and Microsoft. For small research groups and companies, the method is not that useful because of the computational power it requires. Future work will need an even more efficient proxy task that can tell whether a network will perform well with fewer computations. But here is the irony: if we could tell whether a network will perform well without training it, we would not need a search technique in the first place. So everything comes back to the fact that there is no guiding theory of deep learning.

2. Benefit/Cost ratio

The technique does outperform human-designed networks in many cases, but the gains are not huge. On the Cityscapes dataset the gain is 0.7%; on the PASCAL-Person-Part dataset it is 3.7%; and on the PASCAL VOC 2012 dataset it does not outperform human experts (all measured by mIOU). Pushing the state of the art is always worth celebrating, but in practice one could argue that after spending so many resources on the search, the computer should reach a superhuman level of performance (like chess engines versus chess grandmasters). In practice, one may simply use the current state-of-the-art model and avoid the expensive search cost.

3. Still heavily influenced by human bias

When we define the search space, we introduce human bias. First, the network backbone is chosen from previously matured architectures, which may not actually be optimal. Second, the internal branches of the DPC consist of layers whose operations are defined by humans. This limits the search algorithm's ability to find something revolutionary.


References

1. Searching For Efficient Multi-Scale Architectures For Dense Image Prediction [[1]]

2. E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized Evolution for Image Classifier Architecture Search. arXiv:1802.01548, 2018.

3. C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive Neural Architecture Search. In ECCV, 2018.

4. B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning Transferable Architectures for Scalable Image Recognition. In CVPR, 2018.

5. Neural Architecture Search: A Survey [[2]]

6. Deep Residual Learning for Image Recognition [[3]]