STAT946F17/ Teaching Machines to Describe Images via Natural Language Feedback

Introduction

In the era of Artificial Intelligence, one should ideally be able to educate a robot about its mistakes, possibly without needing to dig into the underlying software. Reinforcement learning has become a standard way of training artificial agents that interact with an environment, and several works have explored incorporating humans in the learning process in order to help the agent learn faster. In most cases, the guidance comes in the form of a simple numeric (or “good”/“bad”) reward. In this work, natural language is used as a way to guide an RL agent. The authors argue that a sentence provides a much stronger learning signal than a numeric reward, since it can easily point to where the mistakes occur and suggest how to correct them.

Here the goal is to allow a non-expert human teacher to give feedback to an RL agent in the form of natural language, just as one would to a learning child. The authors focus on the problem of image captioning, in which the quality of the output can easily be judged by non-experts.

Related Works

Several works incorporate human feedback to help an RL agent learn faster.

  1. Thomaz et al. [2006] exploits humans in the loop to teach an agent to cook in a virtual kitchen. The users watch the agent learn and may intervene at any time to give a scalar reward. Reward shaping (Ng et al. [1999]) is used to incorporate this information in the Markov Decision Process (MDP).
  2. Judah et al. [2010] iterates between “practice”, during which the agent interacts with the real environment, and a critique session where a human labels any subset of the chosen actions as good or bad.
  3. Griffith et al. [2013] proposes policy shaping which incorporates right/wrong feedback by utilizing it as direct policy labels.

The above approaches mostly assume that humans provide a numeric reward. A few attempts have been made to advise an RL agent using natural language.

  1. Maclin et al. [1994] translated advice into a short program which was then implemented as a neural network. The units in this network represent Boolean concepts, which recognize whether the observed state satisfies the constraints given by the program. In such a case, the advice network encourages the policy to take the suggested action.
  2. Weston et al. [2016] incorporates human feedback to improve a text-based question answering agent.
  3. Kaplan et al. [2017] exploits textual advice to improve training time of the A3C algorithm in playing an Atari game.

The phrase-based image captioning model used in this work is similar to most image captioning models, except that it exploits attention and linguistic information. Several recent approaches have trained captioning models with policy gradients in order to directly optimize for the desired performance metrics; this work follows the same line.

There are also similar efforts on dialogue-based visual representation learning and conversation modeling. Those models aim to mimic human-to-human conversations, whereas in this work the human converses with and guides an artificial learning agent.

Methodology

The framework consists of a new phrase-based captioning model trained with policy gradients that incorporate natural language feedback provided by a human teacher. The phrase-based structure of the captioning model is what allows natural guidance by a non-expert.
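
Concretely, incorporating such feedback amounts to a policy-gradient update in which the reward is derived from the teacher's response. The following is a minimal REINFORCE-style sketch in PyTorch; the function name policy_gradient_loss, the scalar reward distilled from feedback, and the baseline term are illustrative assumptions, not the paper's exact objective.

<syntaxhighlight lang="python">
import torch

# Illustrative REINFORCE-style loss (a sketch, not the paper's exact objective).
# log_probs: log-probabilities of the tokens in a sampled caption.
# reward:    scalar derived from human feedback (e.g., a feedback-network score).
# baseline:  subtracted from the reward to reduce gradient variance.
def policy_gradient_loss(log_probs: torch.Tensor,
                         reward: float,
                         baseline: float = 0.0) -> torch.Tensor:
    advantage = reward - baseline
    # Minimizing the negative objective == maximizing expected reward.
    return -advantage * log_probs.sum()

# Usage: accumulate log_probs while sampling a caption, then
#   loss = policy_gradient_loss(log_probs, reward=0.8, baseline=0.5)
#   loss.backward(); optimizer.step()
</syntaxhighlight>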

Phrase-based Image Captioning

The captioning model uses a hierarchical Recurrent Neural Network. The model is composed of a two-level LSTM: a phrase RNN at the top level, and a word RNN that generates a sequence of words for each phrase. One can think of the phrase RNN as providing a “topic” at each time step, which instructs the word RNN what to talk about. The structure of the model is shown in the following figure.

[[File:Model_hamid.jpg|center|500px]]
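
For intuition, a minimal PyTorch sketch of such a two-level decoder is given below. All names and dimensions (HierarchicalDecoder, the greedy decoding, feeding a single image feature vector at every phrase step) are illustrative assumptions; the actual model also conditions on the attention layer and label unit described next.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Minimal sketch of a two-level (phrase -> word) decoder. Names, dimensions,
# and the greedy decoding are illustrative, not the paper's exact architecture.
class HierarchicalDecoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.phrase_rnn = nn.LSTMCell(hidden_dim, hidden_dim)  # top level: one step per phrase
        self.word_rnn = nn.LSTMCell(embed_dim, hidden_dim)     # bottom level: one step per word
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat: torch.Tensor, max_phrases: int = 5, max_words: int = 6):
        # image_feat: (B, hidden_dim) global image feature (a simplification;
        # the actual model feeds an attended context vector instead).
        B = image_feat.size(0)
        h_p = c_p = image_feat.new_zeros(B, self.phrase_rnn.hidden_size)
        logits = []
        for _ in range(max_phrases):
            # The phrase RNN produces a "topic" hidden state for this phrase.
            h_p, c_p = self.phrase_rnn(image_feat, (h_p, c_p))
            h_w, c_w = h_p, c_p                                # topic initializes the word RNN
            tok = image_feat.new_zeros(B, dtype=torch.long)    # assume <BOS> has id 0
            for _ in range(max_words):
                h_w, c_w = self.word_rnn(self.embed(tok), (h_w, c_w))
                step_logits = self.word_out(h_w)
                logits.append(step_logits)
                tok = step_logits.argmax(dim=-1)               # greedy choice, for the sketch
        return torch.stack(logits, dim=1)                      # (B, max_phrases * max_words, vocab)
</syntaxhighlight>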

A convolutional neural network is used to extract a set of feature vectors $a = (a_1, \dots, a_n)$, with $a_j$ a feature at location $j$ in the input image. These feature vectors are given to the attention layer, along with two more inputs: the current hidden state of the phrase RNN and the output of the label unit. The label unit predicts one of four possible phrase labels, i.e., noun phrase (NP), preposition phrase (PP), verb phrase (VP), and conjunction phrase (CP), plus an additional <EOS> token that indicates the end of the sentence. This information can be useful for the attention layer: for an NP, the model may look at objects in the image, while for a VP it may focus on more global information.
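
A rough sketch of such a label-conditioned soft-attention layer follows. It uses the standard additive-attention recipe; the class name LabelConditionedAttention, the projection dimensions, and the label embedding are assumptions for illustration rather than the paper's exact equations.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of an attention layer that scores each spatial CNN feature a_j against
# the phrase-RNN hidden state and the predicted phrase label, then returns a
# convex combination of the features. Names and dimensions are illustrative.
class LabelConditionedAttention(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int, num_labels: int = 5, attn_dim: int = 256):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, attn_dim)  # NP, PP, VP, CP, <EOS>
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats: torch.Tensor, h_phrase: torch.Tensor, label_id: torch.Tensor):
        # feats: (B, n, feat_dim); h_phrase: (B, hidden_dim); label_id: (B,)
        cond = self.hidden_proj(h_phrase) + self.label_embed(label_id)         # (B, attn_dim)
        e = self.score(torch.tanh(self.feat_proj(feats) + cond.unsqueeze(1)))  # (B, n, 1)
        alpha = F.softmax(e, dim=1)           # attention weights over the n locations
        context = (alpha * feats).sum(dim=1)  # (B, feat_dim) attended image feature
        return context, alpha.squeeze(-1)
</syntaxhighlight>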

Crowd-sourcing Human Feedback

Feedback Network

Policy Gradient Optimization using Natural Language Feedback

Experimental Results

Conclusion