# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space


*Revision as of 12:42, 17 March 2018*

# Introduction

This paper builds on ideas from PointNet (Qi et al., 2017). The name PointNet is derived from the network's input - a point cloud. A point cloud is a set of three-dimensional points, each with coordinates [math] (x,y,z) [/math]. These coordinates usually represent the surface of an object. For example, a point cloud describing the shape of a torus is shown below.

Processing point clouds is important in applications such as autonomous driving where point clouds are collected from an onboard LiDAR sensor. These point clouds can then be used for object detection. However, point clouds are challenging to process because:

- They are unordered. If [math] N [/math] is the number of points in a point cloud, then there are [math] N! [/math] possible orderings in which the point cloud can be presented to a network.
- The spatial arrangement of the points contains useful information, thus it needs to be encoded.
- The function processing the point cloud needs to be invariant to transformations such as rotations and translations of all points.

Previously, typical point cloud processing methods handled these challenges by transforming the data into a 3D voxel grid or by representing the point cloud with multiple 2D images. When PointNet was introduced, it was novel because it took points directly as its input. PointNet++ improves on PointNet by using a hierarchical method to better capture local structures of the point cloud.

# Review of PointNet

The PointNet architecture is shown below. The input of the network is [math] n [/math] points, each with [math] (x,y,z) [/math] coordinates. Each point is processed individually through a shared multi-layer perceptron (MLP). This network creates an encoding for each point; in the diagram, each point is represented by a 1024-dimensional vector. A max pooling layer then creates a single vector that represents the "global signature" of the point cloud. If classification is the task, this global signature is processed by another MLP to compute the classification scores. If segmentation is the task, the global signature is appended to each point from the "nx64" layer, and these points are processed by an MLP to compute a semantic category score for each point.

The core idea of the network is to learn a symmetric function on transformed points. Through the T-Nets and the MLP network, a transformation is learned with the hope of making the representation invariant to point cloud transformations. Learning a symmetric function resolves the challenge posed by unordered points: a symmetric function produces the same value regardless of the order of its inputs. In PointNet, this symmetric function is the max pooling layer.
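The order-invariance argument above can be sketched in a few lines of NumPy. Here `encode_point` is a toy stand-in for PointNet's shared MLP (an illustrative assumption, not the paper's actual network); the point is only that a per-point encoder followed by max pooling yields the same global signature for any permutation of the input points.

```python
import numpy as np

# Hypothetical per-point encoder standing in for PointNet's shared MLP:
# each (x, y, z) point is mapped independently to a feature vector.
def encode_point(p):
    x, y, z = p
    return np.array([x + y, y * z, np.tanh(x), x * x + z])

def global_signature(points):
    # Shared encoder applied to every point, then max pooling over the
    # point dimension -- a symmetric function, so the result does not
    # depend on the order of the points.
    features = np.stack([encode_point(p) for p in points])
    return features.max(axis=0)

cloud = np.array([[0.1, 0.2, 0.3],
                  [0.9, 0.1, 0.5],
                  [0.4, 0.7, 0.2]])
shuffled = cloud[[2, 0, 1]]   # same points, different order
# global_signature(cloud) and global_signature(shuffled) are identical
```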

# PointNet++

The motivation for PointNet++ is that PointNet does not capture local, fine-grained details. Since PointNet applies a single max pooling layer over all of its points, information such as the local interactions between points is lost.

## Problem Statement

There is a metric space [math] X = (M,d) [/math], where [math]d[/math] is the metric inherited from a Euclidean space [math]\mathbb{R}^n[/math] and [math] M \subseteq \mathbb{R}^n [/math] is the set of points. The goal is to learn a function that takes [math]X[/math] as input and outputs either a class label for [math]X[/math] or a per-point label for each member of [math]M[/math].

## Method

### High Level Overview

The PointNet++ architecture is shown on the right. The core idea is a hierarchical architecture: at each level of the hierarchy, a set of points is processed and abstracted into a new set with fewer points, i.e.,

\begin{aligned} \text{Input at each level: } N \times (d + c) \text{ matrix} \end{aligned}

where [math]N[/math] is the number of points, [math]d[/math] is the dimension of the coordinates [math](x,y,z)[/math], and [math]c[/math] is the dimension of each point's feature representation, and

\begin{aligned} \text{Output at each level: } N' \times (d + c') \text{ matrix} \end{aligned}

where [math]N'[/math] is the new (smaller) number of points and [math]c'[/math] is the dimension of the new feature vector.

Each level has three layers: Sampling, Grouping, and PointNet. The Sampling layer selects points that will act as centroids of local regions within the point cloud. The Grouping layer then finds points near these centroids. Lastly, the PointNet layer applies PointNet to each group to encode local information.

### Sampling Layer

The input of this layer is a set of points [math]\{x_1,x_2,...,x_n\}[/math]. The goal of this layer is to select a subset [math]\{\hat{x}_1, \hat{x}_2,...,\hat{x}_m\}[/math] whose points will serve as the centroids of local regions.

To select these points, farthest point sampling is used: each new point [math]\hat{x}_j[/math] is chosen to be the point most distant from the already-selected set [math]\{\hat{x}_1, \hat{x}_2,...,\hat{x}_{j-1}\}[/math]. This ensures coverage of the entire point cloud, as opposed to random sampling.
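The greedy procedure above can be sketched as follows (a minimal NumPy sketch; the function name and the choice of the first point as the seed are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def farthest_point_sampling(points, m, seed_idx=0):
    # points: (N, 3) array; m: number of centroids to select.
    # Returns the indices of the m selected points.
    selected = [seed_idx]
    # Distance from every point to its nearest selected centroid so far.
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for _ in range(m - 1):
        next_idx = int(np.argmax(dist))  # point farthest from all selected
        selected.append(next_idx)
        # Update nearest-centroid distances with the newly selected point.
        dist = np.minimum(dist, np.linalg.norm(points - points[next_idx], axis=1))
    return np.array(selected)

pts = np.array([[0., 0., 0.], [0.1, 0., 0.], [5., 0., 0.], [0., 5., 0.]])
idx = farthest_point_sampling(pts, 3)
# Starting from point 0, the two far-away points (2 and 3) are selected
# before the nearby point 1, illustrating the coverage property.
```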

### Grouping Layer

The objective of the grouping layer is to form local regions around each centroid by grouping points near the selected centroids. The input is a point set of size [math]N \times (d + c)[/math] and the coordinates of the centroids, [math]N' \times d[/math]. The output is the set of groups of points within each region, [math]N' \times k \times (d+c)[/math], where [math]k[/math] is the number of points in each region.

Note that [math]k[/math] can vary per group. The PointNet layer later converts each group into a feature vector of the same size for all regions at a given hierarchical level.

To determine which points belong to a group, ball query is used: all points within a given radius of the centroid are grouped. This is advantageous over k-nearest-neighbour search because it guarantees a region of fixed spatial scale, which is important when learning local structure.
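A minimal sketch of ball query (the function name and optional group-size cap are illustrative assumptions; real implementations also pad groups to a fixed size):

```python
import numpy as np

def ball_query(points, centroid, radius, k_max=None):
    # Return indices of all points within `radius` of `centroid`.
    # Unlike k-nearest neighbours, the region has a fixed spatial
    # extent, so the local structure it captures is scale-consistent.
    dist = np.linalg.norm(points - centroid, axis=1)
    idx = np.flatnonzero(dist <= radius)
    if k_max is not None:
        idx = idx[:k_max]  # optionally cap the group size
    return idx

pts = np.array([[0., 0., 0.], [0.2, 0., 0.], [1., 1., 1.], [0., 0.1, 0.]])
group = ball_query(pts, np.array([0., 0., 0.]), radius=0.5)
# Only the points within 0.5 of the origin are grouped; the distant
# point at (1, 1, 1) is excluded regardless of how many neighbours exist.
```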

### PointNet Layer

After grouping, PointNet is applied to each group. First, however, the coordinates of the points in a local region are translated into a local coordinate frame via [math] x_i^{(local)} = x_i - \bar{x}[/math], where [math]\bar{x}[/math] is the coordinates of the centroid.

### Robust Feature Learning under Non-Uniform Sampling Density

The previous description of grouping uses a single scale. This is not optimal because the point density varies across the point cloud. At each level, it would be better if the PointNet layer were applied to adaptively sized groups, depending on the local point density.

The two grouping methods the authors propose are shown in the diagram below. Multi-scale grouping (MSG) applies PointNet at several scales per group and concatenates the features from the different scales. This method, however, is computationally expensive because it applies PointNet to all the points of every large-scale region at each centroid. Multi-resolution grouping (MRG), on the other hand, is less computationally expensive but still collects features adaptively. As shown in the diagram, the left vector is obtained by applying PointNet to three points, and these three points obtained their information from three groups at the level below. This vector is concatenated with a second vector created by applying PointNet directly to all the raw points in the region. The second vector can be weighted more heavily when the first region contains only a sparse set of points.
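The MSG idea - pool features at several radii in the centroid's local frame and concatenate - can be sketched as below. This is a toy illustration, not the paper's network: `pointnet_feature` stands in for a mini-PointNet using plain max pooling over coordinates, and the function names are assumptions.

```python
import numpy as np

def pointnet_feature(group):
    # Stand-in for a mini-PointNet: per-point features followed by max
    # pooling (here the "features" are just the local coordinates).
    if group.shape[0] == 0:
        return np.zeros(group.shape[1])
    return group.max(axis=0)

def multi_scale_feature(points, centroid, radii):
    # MSG sketch: run the toy PointNet on balls of several radii around
    # the centroid (translated into the centroid's local frame) and
    # concatenate the pooled features from all scales.
    feats = []
    for r in radii:
        mask = np.linalg.norm(points - centroid, axis=1) <= r
        feats.append(pointnet_feature(points[mask] - centroid))
    return np.concatenate(feats)

pts = np.array([[0.1, 0., 0.], [0., 0.3, 0.], [1., 0., 0.]])
feat = multi_scale_feature(pts, np.array([0., 0., 0.]), radii=[0.5, 2.0])
# The output length is fixed (3 coordinates x 2 scales) no matter how
# many points fall inside each ball -- the property noted above for k.
```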

## Point Cloud Segmentation

If the task is segmentation, the architecture is modified: feature propagation levels successively interpolate features from the subsampled points back to the original points. At each propagation level, features are interpolated with an inverse-distance-weighted average over the nearest neighbours, concatenated with skip-linked features from the corresponding abstraction level, and passed through a unit PointNet to produce per-point labels.

## Experiments

To validate the effectiveness of PointNet++, experiments were performed in three areas: classification in Euclidean metric space, semantic scene labelling, and classification in a non-Euclidean space.

### Point Set Classification in Euclidean Metric Space

The digit dataset, MNIST, was converted to a 2D point cloud. Pixel intensities were normalized in the range of [math][0, 1][/math], and only pixels with intensities larger than 0.5 were considered. The coordinate system was set at the centre of the image. PointNet++ achieved a classification error of 0.51%. The original PointNet had 0.78% classification error. The table below compares these results to the state-of-the-art.

[[File:mnist_results.png | 300px|thumb|center|MNIST classification results.]]

In addition, the ModelNet40 dataset was used. This dataset consists of CAD models. Three-dimensional point clouds were sampled from the mesh surfaces of the ModelNet40 shapes. The classification results from this dataset are shown below.

[[File:modelnet40.png | 300px|thumb|center|ModelNet40 classification results.]]

An experiment was performed to show how accuracy was affected by the number of points used. With PointNet++ using multi-scale grouping and dropout, performance decreased by less than 1% when the 1024 test points were reduced to 256.

[[File:num_points_acc.png | 300px|thumb|center|Relationship between accuracy and the number of points used for classification.]]

# Sources

1. Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In NIPS, 2017.

2. Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR, 2017.