Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction


Motivation

In the field of artificial intelligence, a major goal is to enable machines to understand complex images, including the underlying relationships between the objects in a scene. Although models exist today that capture both complex labels and interactions between labels, there are no clear guidelines for how such models should be designed when leveraging deep learning. This paper introduces a design principle for such models that stems from the concept of permutation invariance, and demonstrates state-of-the-art performance for models that follow this principle.

The primary contributions that this paper makes include:

  1. Deriving necessary and sufficient conditions for respecting graph-permutation invariance in deep structured prediction architectures
  2. Empirically demonstrating the benefit of graph-permutation invariance
  3. Developing a state-of-the-art model for scene-graph prediction over a large set of complex visual scenes

Introduction

In order for a machine to interpret complex visual scenes, it must recognize and understand both the objects in the scene and the relationships between them. A scene graph is a representation of the set of objects and relations that exist in the scene, where objects are represented as nodes and relations as edges connecting those nodes. Hence, predicting the scene graph amounts to jointly inferring the set of objects and relations of a visual scene.
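
To make this representation concrete, the following minimal Python sketch (an illustration, not code from the paper; the object and relation names are hypothetical) encodes a scene graph as labeled nodes with directed, labeled edges:

<pre>
# Illustrative scene graph (hypothetical labels, not from the paper):
# objects are nodes; each relation is a directed, labeled edge given as
# (subject index, relation label, object index).
scene_graph = {
    "objects": ["man", "horse", "hat"],
    "relations": [
        (0, "riding", 1),    # man -riding-> horse
        (0, "wearing", 2),   # man -wearing-> hat
    ],
}

for subj, rel, obj in scene_graph["relations"]:
    print(scene_graph["objects"][subj], rel, scene_graph["objects"][obj])
</pre>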

Given that the objects in a scene are interdependent, the objects and relations must be predicted jointly. The field of structured prediction, which addresses the general problem of inferring multiple interdependent labels, is therefore of interest for this problem.

In structured prediction models, a score function <math>s(x, y)</math> is defined to evaluate the compatibility between a label <math>y</math> and an input <math>x</math>. For instance, when interpreting the scene in an image, <math>x</math> refers to the image itself, and <math>y</math> refers to a complex label containing both the objects and the relations between them. As with most other inference methods, the goal is to find the label <math>y^* = \arg\max_y s(x, y)</math>. However, the major concern is that the space of possible label assignments grows exponentially with the input size. For example, even for a seemingly simple image, the vocabulary of possible object labels may be very large, making the scoring function difficult to optimize.
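
As a rough illustration of why exhaustive inference does not scale (a toy sketch with a made-up score function, not the paper's model), enumerating every joint assignment of <math>n</math> node labels from a vocabulary of size <math>L</math> requires scoring <math>L^n</math> candidates:

<pre>
import itertools

# Toy structured inference y* = argmax_y s(x, y) by brute force.
# With L labels per node and n nodes there are L**n joint assignments,
# so enumeration quickly becomes infeasible. The score function below
# is a placeholder, not the paper's.
labels = ["person", "horse", "hat"]   # hypothetical label vocabulary (L = 3)
num_nodes = 3                         # objects detected in the image (n = 3)

def score(x, y):
    return len(set(y))                # placeholder: rewards distinct labels

x = "image features"                  # stands in for the real input
y_star = max(itertools.product(labels, repeat=num_nodes),
             key=lambda y: score(x, y))
print(y_star)                         # one maximizing joint assignment
</pre>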

The paper presents an alternative approach, in which the input <math>x</math> is mapped to the structured output <math>y</math> by a "black box" neural network, omitting the score function entirely. The main concern for this approach is how to determine the network architecture.
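
A minimal sketch of what such a black-box predictor could look like is shown below (a hypothetical PyTorch architecture for illustration only; the paper's actual model is described in the following sections). Per-object features are mapped directly to object-label scores, and concatenated feature pairs to relation-label scores, with no explicit score function:

<pre>
import torch
import torch.nn as nn

# Hypothetical "black box" predictor (a sketch, not the paper's model):
# object features go straight to object-label logits, and each ordered
# pair of object features goes to relation-label logits.
class BlackBoxSceneGraphNet(nn.Module):
    def __init__(self, feat_dim=64, num_obj_labels=10, num_rel_labels=5):
        super().__init__()
        self.obj_head = nn.Linear(feat_dim, num_obj_labels)
        self.rel_head = nn.Linear(2 * feat_dim, num_rel_labels)

    def forward(self, feats):                      # feats: (n, feat_dim)
        n = feats.size(0)
        obj_logits = self.obj_head(feats)          # (n, num_obj_labels)
        # Build all ordered pairs (i, j) by broadcasting and concatenation.
        pairs = torch.cat(
            [feats.unsqueeze(1).expand(n, n, -1),
             feats.unsqueeze(0).expand(n, n, -1)], dim=-1)
        rel_logits = self.rel_head(pairs)          # (n, n, num_rel_labels)
        return obj_logits, rel_logits

net = BlackBoxSceneGraphNet()
obj_logits, rel_logits = net(torch.randn(3, 64))   # 3 detected objects
print(obj_logits.shape, rel_logits.shape)
</pre>

Note that this naive sketch is sensitive to how the objects are ordered; the paper's design principle, introduced next, characterizes exactly which architectures avoid this by being invariant to permutations of the graph's nodes.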

Structured prediction