Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness

Presented by

  • Hudson Ash
  • Stephen Kingston
  • Richard Zhang
  • Alexandre Xiao
  • Ziqiu Zhu

Problem & Motivation

Optical flow is the apparent motion of image brightness patterns in objects, surfaces and edges in videos. In more layman's terms, it tracks the change in position of pixels between two frames caused by the movement of an object or the camera, and it does this on the basis of two assumptions:

1. Pixel intensities do not change rapidly between frames (brightness constancy).

2. Groups of pixels move together (motion smoothness).

Both of these assumptions are derived from real-world observations. Firstly, the time between two consecutive frames of a video is so minuscule that it is extremely improbable for the intensity of a pixel to change completely, even if its location has changed. Secondly, pixels do not teleport: the assumption that groups of pixels move together implies spatial coherence, or smoothness, among neighbouring pixels.
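
These two assumptions can be written down concretely. As a minimal sketch, the notation below follows the classical Horn-Schunck formulation rather than anything taken from the paper itself: write the image intensity as I(x, y, t) and the flow at a pixel as (u, v).

% Brightness constancy: a pixel keeps its intensity as it moves between frames.
\[
  I(x, y, t) \approx I(x + u, y + v, t + 1)
\]

% A first-order Taylor expansion of the right-hand side gives the optical flow
% constraint equation, where I_x, I_y, I_t are spatial and temporal derivatives:
\[
  I_x u + I_y v + I_t \approx 0
\]

% Motion smoothness penalizes large spatial gradients of the flow field, so that
% neighbouring pixels are encouraged to move together; the weight \lambda trades
% off the two assumptions in a Horn-Schunck style energy:
\[
  E(u, v) = \iint \Big( (I_x u + I_y v + I_t)^2
            + \lambda \big( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \big) \Big) \, dx \, dy
\]

Minimizing E(u, v) over the whole image yields a flow field that matches intensities across frames (first term) while remaining spatially smooth (second term).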

The current mainstream approach to solving optical flow problems, albeit widely successful, has been supervised learning with convolutional neural networks (convnets). The inherent challenge with these supervised approaches lies in the ground-truth flow: the process of gathering provable values of the target variable for the training and testing datasets. Since the motion field ground truth cannot be measured directly, segmentation ground-truthing is generally used instead. Segmentation ground-truthing is not always automated, so it requires laborious labeling of items in the video, sometimes done manually with ground-truth labeling software, and the larger the training and test datasets become, the more laborious the ground-truthing becomes. In the case of the KITTI dataset, a collection of images captured from cars driving around a mid-sized German city, accurate ground truth for the training and testing data is obtained using high-tech laser scanners together with a GPS localization device mounted on top of the cars. Even so, directly obtaining the motion field ground truth from real scenes, the quantity that optical flow attempts to approximate, is not possible.