Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness


Presented by

  • Hudson Ash
  • Stephen Kingston
  • Richard Zhang
  • Alexandre Xiao
  • Ziqiu Zhu

Problem & Motivation

Approaches to solving the optical flow problem, albeit widely successful, have mostly relied on supervised learning with convolutional neural networks (convnets). The inherent challenge with these supervised approaches lies in the groundtruth flow: gathering reliable measurements of the target variable for the training and testing datasets. Directly obtaining the motion-field groundtruth from real-life videos is not possible, so synthetic data is often used instead.

The paper "Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness" by Yu et. al. presents an unsupervised approach to address the groundtruth acquisition challenges of optical flow, by making use of the standard Flownet architecture with a spatial transformer component to devise a "self-supervising" loss function.

Optical Flow

Optical flow is the apparent motion of image brightness patterns across objects, surfaces and edges in a video. In simpler terms, it tracks the change in position of pixels between two frames caused by the movement of the object or of the camera. Most optical flow estimates rest on two assumptions:

1. Pixel intensities do not change rapidly between frames (brightness constancy).

2. Groups of pixels move together (motion smoothness).

Both of these assumptions are derived from real-world observations. Firstly, the time between two consecutive frames of a video is so minuscule that it is extremely improbable for the intensity of a pixel to change completely, even if its location has changed. Secondly, pixels do not teleport: the assumption that groups of pixels move together implies spatial coherence, and that the image motion of objects changes gradually over time, creating motion smoothness.

Given these assumptions, imagine a video frame (a 2D image) with a pixel at position [math]\displaystyle{ (x,y) }[/math] at some time [math]\displaystyle{ t }[/math]; in a later frame, at time [math]\displaystyle{ t + \Delta t }[/math], that pixel is at position [math]\displaystyle{ (x + \Delta x, y + \Delta y) }[/math].

Then by the first assumption, the intensity of the pixel at time t is the same as the intensity of the pixel at time [math]\displaystyle{ t + \Delta t }[/math]:

[math]\displaystyle{ I(x+\Delta x,y+\Delta y,t+\Delta t) = I(x,y,t) }[/math]

Expanding the left-hand side with a first-order Taylor series and ignoring the higher-order terms, we get:

[math]\displaystyle{ I(x+\Delta x,y+\Delta y,t+\Delta t) = I(x,y,t) + \frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t }[/math]

From the two equations, it follows that:

[math]\displaystyle{ \frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t = 0 }[/math]

Dividing through by [math]\displaystyle{ \Delta t }[/math] gives

[math]\displaystyle{ \frac{\partial I}{\partial x}V_x+\frac{\partial I}{\partial y}V_y+\frac{\partial I}{\partial t} = 0 }[/math]

where [math]\displaystyle{ V_x,V_y }[/math] are the [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] components of the velocity (displacement over time) or optical flow of [math]\displaystyle{ I(x,y,t) }[/math] and [math]\displaystyle{ \tfrac{\partial I}{\partial x} }[/math], [math]\displaystyle{ \tfrac{\partial I}{\partial y} }[/math], and [math]\displaystyle{ \tfrac{\partial I}{\partial t} }[/math] are the derivatives of the image at [math]\displaystyle{ (x,y,t) }[/math] in the corresponding directions.

This can be rewritten as:

[math]\displaystyle{ I_xV_x+I_yV_y=-I_t }[/math]

or

[math]\displaystyle{ \nabla I^T\cdot\vec{V} = -I_t }[/math]

where [math]\displaystyle{ \nabla I }[/math] is the spatial gradient of the image and [math]\displaystyle{ \vec{V} }[/math] is the optical flow (velocity) vector.

Since this is a single equation with two unknowns [math]\displaystyle{ V_x,V_y }[/math], it cannot be solved on its own; this is known as the aperture problem of optical flow algorithms. To solve for the flow, another set of constraints is required, which is where assumption 2 can be applied.
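
A small NumPy sketch (our illustration, not from the paper) makes the aperture problem concrete: at a single pixel, finite differences give estimates of the derivatives [math]\displaystyle{ I_x, I_y, I_t }[/math], which yield exactly one linear equation in the two unknowns [math]\displaystyle{ (V_x, V_y) }[/math].

<pre>
import numpy as np

def brightness_constancy_constraint(frame1, frame2, x, y):
    """Return (I_x, I_y, I_t) at pixel (x, y) via simple finite differences."""
    I_x = (frame1[y, x + 1] - frame1[y, x - 1]) / 2.0   # spatial derivative in x
    I_y = (frame1[y + 1, x] - frame1[y - 1, x]) / 2.0   # spatial derivative in y
    I_t = frame2[y, x] - frame1[y, x]                    # temporal derivative
    # Any (V_x, V_y) with I_x*V_x + I_y*V_y = -I_t is consistent with this pixel,
    # so a single constraint cannot determine both components of the flow.
    return I_x, I_y, I_t
</pre>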

Traditional Approaches

Traditional approaches to the optical flow problem were largely differential (gradient-based) methods. Horn and Schunck (1981) created one of the first and most classical approaches to optical flow estimation. Without diving deep into the math, Horn and Schunck built constraints from the spatio-temporal derivatives of image brightness. Their estimation addresses the aperture problem by adding a smoothness condition: the optical flow field is assumed to vary smoothly across the entire image (global motion smoothness). They assume that object motion in a sequence is rigid and approximately constant, and that objects in a pixel's neighborhood have similar velocities, so the flow changes smoothly over space and time. The challenge with this approach is that on frames with rougher movements, the accuracy of the estimates decreases dramatically.

Another classical method, Lucas-Kanade, approaches the problem with a local motion smoothness assumption. By assuming smoothness only within a local neighborhood rather than globally, Lucas-Kanade reduced the Horn and Schunck method's sensitivity to rough movements, though as a differential method it still remains inaccurate on frames with large motions.
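
The Lucas-Kanade idea can be sketched in a few lines of NumPy (window size and gradient scheme here are illustrative choices, not the original formulation): every pixel in a small window is assumed to share the same velocity, so their brightness-constancy equations are stacked and solved by least squares.

<pre>
import numpy as np

def lucas_kanade_window(I_x, I_y, I_t, x, y, half_win=2):
    """Estimate (V_x, V_y) at (x, y) from a (2*half_win+1)^2 neighbourhood,
    given precomputed gradient images I_x, I_y, I_t."""
    ys = slice(y - half_win, y + half_win + 1)
    xs = slice(x - half_win, x + half_win + 1)
    # one brightness-constancy equation per pixel in the window
    A = np.stack([I_x[ys, xs].ravel(), I_y[ys, xs].ravel()], axis=1)   # (N, 2)
    b = -I_t[ys, xs].ravel()                                           # (N,)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)                          # least-squares solve
    return v   # [V_x, V_y]
</pre>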


It wasn't until 2015 that FlowNet [Dosovitskiy et al., 2015] was proposed as the first approach to use a deep neural network for end-to-end optical flow estimation.

Related Works

Spatial Transformer Networks

As Convolutional Neural Networks have been established as the preferred solution in image recognition and computer vision problems, increasing attention has been dedicated to evolving the network architecture to further improve predictive power. One such adaptation is the Spatial Transformer Network, developed by Google DeepMind in 2015.

Spatial invariance is a desired property of any system that deals with visual tasks; however, the basic CNN is not very robust to input deformations such as scale/translation/rotation variations, viewpoint variations, shape deformations, etc. The introduction of local pooling layers into CNNs has helped address this issue to some degree by pooling groups of input cells into simpler cells, helping to remove the adverse impact of noise on the input. However, pooling layers are destructive: a standard 2x2 pooling layer discards 75% of the input data, resulting in the loss of exact positional information, which can be very helpful in visual recognition tasks. Also, since pooling layers are predefined and non-adaptive, their inclusion may only be helpful in the presence of small deformations; with large transformations, pooling may provide little to no spatial invariance to the network.

The Spatial Transformer Network (STN) addresses the spatial invariance issues described above by producing an explicit spatial transformation to carve out the target object. Advantageous properties of the STN are as follows:

1. Modular - they can easily be implemented anywhere into an existing CNN

2. Differentiable - they can be trained using backpropagation without modifying the original model

3. Dynamic - they perform a unique spatial transformation on the feature map for each input sample

STNs are composed of three primary components:

1. Localization network: a CNN that outputs the parameters of a spatial transformation

2. Grid generator: generates a sampling grid to which the transformation predicted by the localization network is applied

3. Sampler: samples the input feature map according to the transformed grid, using a differentiable interpolation function
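
A minimal PyTorch sketch of these three components for an affine transformation is shown below; F.affine_grid plays the role of the grid generator and F.grid_sample the differentiable sampler, while the localization network here is a deliberately tiny stand-in rather than the architecture used in the STN paper.

<pre>
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        # 1. Localization network: regresses the 6 parameters of an affine transform
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),
        )
        # initialize to the identity transform so training starts from "no warp"
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                           # transform parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)   # 2. grid generator
        return F.grid_sample(x, grid, align_corners=False)           # 3. sampler

# usage sketch: out = SpatialTransformer()(torch.randn(1, 1, 28, 28))
</pre>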

The Paper's Approach: UnsupFlownet

Architecture