STAT946F17/ Teaching Machines to Describe Images via Natural Language Feedback

Introduction

In the era of Artificial Intelligence, one should ideally be able to educate a robot about its mistakes, possibly without needing to dig into the underlying software. Reinforcement learning has become a standard way of training artificial agents that interact with an environment. Several works have explored the idea of incorporating humans in the learning process in order to help a reinforcement learning (RL) agent learn faster. In most cases, the guidance comes in the form of a simple numerical (or “good”/“bad”) reward. In this work, natural language is used as a way to guide an RL agent. The authors argue that a sentence provides a much stronger learning signal than a numeric reward, since it can easily point to where the mistakes occur and suggest how to correct them.

Here the goal is to allow a non-expert human teacher to give feedback to an RL agent in the form of natural language, just as one would to a learning child. The authors focus on the problem of image captioning, in which the quality of the output can easily be judged by non-experts.

Related Works

Several works incorporate human feedback to help an RL agent learn faster.

  1. Thomaz et al. [2006] exploits humans in the loop to teach an agent to cook in a virtual kitchen. The users watch the agent learn and may intervene at any time to give a scalar reward. Reward shaping (Ng et al. [1999]) is used to incorporate this information in the Markov Decision Process (MDP).
  2. Judah et al. [2010] iterates between “practice”, during which the agent interacts with the real environment, and a critique session where a human labels any subset of the chosen actions as good or bad.
  3. Griffith et al. [2013] proposes policy shaping which incorporates right/wrong feedback by utilizing it as direct policy labels.

The above approaches mostly assume that humans provide a numeric reward. A few attempts have been made to advise an RL agent using language.

  1. Maclin et al. [1994] translated advice to a short program which was then implemented as a neural network. The units in this network represent Boolean concepts, which recognize whether the observed state satisfies the constraints given by the program. In such a case, the advice network will encourage the policy to take the suggested action.
  2. Weston et al. [2016] incorporates human feedback to improve a text-based question answering agent.
  3. Kaplan et al. [2017] exploits textual advice to improve training time of the A3C algorithm in playing an Atari game.

The Phrase-based Image Captioning Model is similar to most image captioning models except that it exploits attention and linguistic information. Several recent approaches trained the captioning model with policy gradients in order to directly optimize for the desired performance metrics. This work follows the same line.

There are also similar efforts on dialogue-based visual representation learning and conversation modeling. Those models aim to mimic human-to-human conversations, whereas in this work the human converses with and guides an artificial learning agent.

Methodology

The framework consists of a new phrase-based captioning model, trained with policy gradients, that incorporates natural language feedback provided by a human teacher. The phrase-based captioning model allows natural guidance by a non-expert.

Phrase-based Image Captioning

The captioning model uses a hierarchical Recurrent Neural Network. The model is composed of a two-level LSTM: a phrase RNN at the top level, and a word RNN that generates the sequence of words for each phrase. One can think of the phrase RNN as providing a “topic” at each time step, which instructs the word RNN what to talk about. The structure of the model is explained through the following figure.

A convolutional neural network is used to extract a set of feature vectors $a = (a_1, \dots, a_n)$, with $a_j$ a feature vector for location $j$ in the input image. These feature vectors are given to the attention layer. There are two more inputs to the attention layer: the current hidden state of the phrase-RNN and the output of the label unit. The label unit predicts one out of four possible phrase labels, i.e., noun phrase (NP), preposition phrase (PP), verb phrase (VP), or conjunction phrase (CP), plus an additional <EOS> token to indicate the end of the sentence. This information can be useful for the attention layer: for example, for an NP the model may look at objects in the image, while for a VP it may focus on more global information. The computations can be expressed with the following equations:

$$ h_t = f_{phrase}(h_{t-1}, l_{t-1}, c_{t-1}, e_{t-1}) \quad \small{\text{(hidden state of the phrase-RNN at time step t)}} \\ l_t = \text{softmax}(f_{phrase\text{-}label}(h_t)) \quad \small{\text{(output of the label unit)}} \\ c_t = f_{att}(h_t, l_t, a) \quad \small{\text{(output of the attention layer)}} $$

After each phrase step, the output of the phrase-RNN is passed to another LSTM (the word-RNN) that produces the words of that phrase. $w_{t,i}$ denotes the i-th word output of the word-RNN in the t-th phrase. There is an additional <EOP> token in the word-RNN’s vocabulary, which signals the end of the phrase. Furthermore, $h_{t,i}$ denotes the i-th hidden state of the word-RNN for the t-th phrase. $$ h_{t,i} = f_{word}(h_{t,i-1}, c_t, w_{t,i}) \\ w_{t,i} = f_{out}(h_{t,i}, c_t, w_{t,i-1}) \\ e_t = f_{word-phrase}(w_{t,1}, \dots, w_{t,n}) $$

Note that $e_t$ encodes the generated phrase via simple mean-pooling over the words, which provides additional word-level context to the next phrase.
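To make the two-level decoder concrete, here is a minimal PyTorch-style sketch of the phrase and word steps described above. The module names, dimensions, and the simple additive attention are illustrative assumptions rather than the authors' exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HierarchicalDecoder(nn.Module):
        """Illustrative two-level (phrase/word) decoder; all sizes are assumptions."""
        def __init__(self, feat_dim=512, hid_dim=512, emb_dim=512,
                     vocab_size=23115, num_labels=5):  # NP, PP, VP, CP, <EOS>
            super().__init__()
            self.phrase_cell = nn.LSTMCell(num_labels + feat_dim + emb_dim, hid_dim)  # f_phrase
            self.label_mlp = nn.Linear(hid_dim, num_labels)            # f_phrase-label
            self.att = nn.Linear(hid_dim + num_labels + feat_dim, 1)   # f_att (additive form)
            self.word_cell = nn.LSTMCell(emb_dim + feat_dim, hid_dim)  # f_word
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.out = nn.Linear(hid_dim + feat_dim + emb_dim, vocab_size)  # f_out

        def phrase_step(self, h_prev, c_state, l_prev, c_prev, e_prev, a):
            # h_t = f_phrase(h_{t-1}, l_{t-1}, c_{t-1}, e_{t-1})
            h_t, c_state = self.phrase_cell(
                torch.cat([l_prev, c_prev, e_prev], dim=-1), (h_prev, c_state))
            # l_t = softmax(f_phrase-label(h_t))
            l_t = F.softmax(self.label_mlp(h_t), dim=-1)
            # c_t = f_att(h_t, l_t, a): attend over the n image feature vectors a_j
            n = a.size(1)
            query = torch.cat([h_t, l_t], dim=-1).unsqueeze(1).expand(-1, n, -1)
            alpha = F.softmax(self.att(torch.cat([query, a], dim=-1)).squeeze(-1), dim=-1)
            c_t = (alpha.unsqueeze(-1) * a).sum(dim=1)
            return h_t, c_state, l_t, c_t

        def word_step(self, h_prev, c_state, w_prev, c_t):
            # h_{t,i} = f_word(h_{t,i-1}, c_t, previous word); logits give w_{t,i}
            emb = self.embed(w_prev)
            h_i, c_state = self.word_cell(torch.cat([emb, c_t], dim=-1), (h_prev, c_state))
            logits = self.out(torch.cat([h_i, c_t, emb], dim=-1))
            return h_i, c_state, logits

        def phrase_embedding(self, word_embs):
            # e_t = f_word-phrase(w_{t,1..n}): mean-pool the embeddings of the generated phrase
            return word_embs.mean(dim=1)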

Crowd-sourcing Human Feedback

The authors created a web interface that allows them to collect feedback. Two rounds of annotation are designed. In the first round, the annotator is shown a captioned image and asked to assess the quality of the caption by choosing between: perfect, acceptable, grammar mistakes only, minor errors, or major errors. The annotators were instructed to choose minor or major errors if the caption contained errors in semantics: minor for small errors such as wrong or missing attributes or awkward prepositions, and major whenever any object or action naming is wrong.

For the next (more detailed, and thus more costly) round of annotation, only captions that were not marked as either perfect or acceptable in the first round are selected. Since these captions contain errors, the new annotator is required to provide detailed feedback about the mistakes (a hypothetical example record is sketched after the list below). Annotators are asked to:

  1. Choose the type of required correction (something “should be replaced”, “is missing”, or “should be deleted”)
  2. Write feedback in natural language (annotators are asked to describe a single mistake at a time)
  3. Mark the type of mistake (whether the mistake corresponds to an error in object, action, attribute, preposition, counting, or grammar)
  4. Highlight the word/phrase that contains the mistake
  5. Correct the chosen word/phrase
  6. Evaluate the quality of the caption after correction (it could be bad even after one round of correction)
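
For concreteness, a single second-round feedback record can be pictured roughly as the following Python dictionary; the field names and example values are hypothetical, not the authors' exact annotation schema.

    # Hypothetical representation of one round-two feedback record (illustrative only).
    feedback_record = {
        "caption": "a woman is sitting on a bench with a dog",
        "correction_type": "should be replaced",       # or "is missing" / "should be deleted"
        "feedback": "the animal is a cat, not a dog",   # free-form natural language feedback
        "mistake_type": "object",    # object / action / attribute / preposition / counting / grammar
        "marked_span": "a dog",      # word/phrase that contains the mistake
        "correction": "a cat",       # annotator's corrected word/phrase
        "quality_after_correction": "acceptable",
    }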


In the above figures, the left one shows the statistics of the evaluations before and after one round of the correction task, and the right one depicts the annotation interface and an example of caption correction.

Feedback Network

The collected feedback provides a strong supervisory signal that can be used in the RL framework. In particular, the authors design a neural network (the feedback network, or FBN) that provides an additional reward based on the feedback sentence.

RL training requires generating samples (captions) from the model. Thus, during training, the sampled captions for each training image will change (they will differ from the reference maximum-likelihood caption for which feedback was obtained). The goal of the feedback network is to read a newly sampled caption and judge the correctness of each phrase, conditioned on the feedback. This network performs the following computations:


$$ h_t^{caption} = f_{sent}(h_{t-1}^{caption}, \omega_t^c) \\ h_t^{feedback} = f_{sent}(h_{t-1}^{feedback}, \omega_t^f) \\ q_i = f_{phrase}(\omega_{i,1}^c, \omega_{i,2}^c, \dots, \omega_{i,N}^c) \\ o_i = f_{fbn}(h_T^{caption}, h_T^{feedback}, q_i, m) $$

Here, $\omega_t^c$ and $\omega_t^f$ denote the one-hot encodings of the t-th word in the sampled caption and in the feedback sentence, respectively. FBN encodes both the caption and the feedback using an LSTM ($f_{sent}$, with shared parameters), performs mean pooling ($f_{phrase}$) over the words in a phrase to represent phrase i with $q_i$, and passes this information through a 3-layer MLP ($f_{fbn}$). The MLP also accepts additional information about the mistake type (e.g., wrong object/action), encoded as a one-hot vector $m$.
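
The computations above can be sketched as follows in PyTorch. The layer sizes, embedding dimension, and MLP width are assumptions; the shared LSTM encoder, phrase mean-pooling, mistake-type one-hot input, and three-way output follow the description.

    import torch
    import torch.nn as nn

    class FeedbackNetwork(nn.Module):
        """Sketch of the FBN; dimensions are illustrative assumptions."""
        def __init__(self, vocab_size=23115, emb_dim=300, hid_dim=512,
                     num_mistake_types=6, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # f_sent (shared weights)
            self.mlp = nn.Sequential(                                   # 3-layer f_fbn
                nn.Linear(2 * hid_dim + emb_dim + num_mistake_types, hid_dim), nn.ReLU(),
                nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                nn.Linear(hid_dim, num_classes),
            )

        def forward(self, caption_ids, feedback_ids, phrase_ids, mistake_onehot):
            # h_T^caption and h_T^feedback: final LSTM states of the two word sequences
            _, (h_cap, _) = self.encoder(self.embed(caption_ids))
            _, (h_fb, _) = self.encoder(self.embed(feedback_ids))
            # q_i = f_phrase(...): mean-pool the embeddings of the words in phrase i
            q_i = self.embed(phrase_ids).mean(dim=1)
            # o_i = f_fbn(...): logits over {correct, wrong, not relevant}
            x = torch.cat([h_cap[-1], h_fb[-1], q_i, mistake_onehot], dim=-1)
            return self.mlp(x)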

Policy Gradient Optimization using Natural Language Feedback

One can think of a caption decoder as an agent following a parameterized policy $p_\theta$ that selects an action at each time step. An “action” in this case is choosing a word from the vocabulary (for the word RNN) or a phrase label (for the phrase RNN). The objective for learning the parameters of the model is to maximize the expected reward received when completing the caption $w^s = (w^s_1, \dots, w^s_T)$ ($w_t^s$ is the word sampled from the model at time step t), i.e., to minimize the loss:

$$ L(\theta) = -\mathop{{}\mathbb{E}}_{\omega^s \sim p_\theta}[r(w^s)] $$

To optimize this objective, we need the gradient:

$$ \nabla_\theta L(\theta) = - \mathop{{}\mathbb{E}}_{\omega^s \sim p_\theta}[r(w^s)\nabla_\theta \log p_\theta(w^s)] $$

which is estimated using a single Monte-Carlo sample:

$$ \nabla_\theta L(\theta) \approx - r(w^s)\nabla_\theta \log p_\theta(w^s) $$

A baseline $b = r(\hat{\omega})$ is then subtracted, where $\hat{\omega}$ is the greedily decoded caption. A baseline does not change the expected gradient but can drastically reduce its variance:

$$ \hat{\omega}_t = \mathop{\arg\max}_{\omega_t} \ p(\omega_t|h_t) \\ \nabla_\theta L(\theta) \approx - (r(w^s) - r(\hat{\omega}))\nabla_\theta \log p_\theta(w^s) $$

Reward: first, a sentence reward is defined as a weighted sum of BLEU scores:

$$ r(\omega^s) = \beta \sum_i \lambda_i \cdot BLEU_i(\omega^s, ref) $$

As reference captions for computing the reward, the authors either use reference captions generated by a snapshot of the model that were evaluated as having no minor or major errors, or ground-truth captions. In addition, they weigh the reward by the caption quality as provided by the annotators (e.g., $\beta = 1$ for perfect and $\beta = 0.8$ for acceptable). They further incorporate the reward provided by the feedback network:

$$ r(\omega_t^p) = r(\omega^s) + \lambda_f f_{fbn}(\omega^s, feedback, \omega_t^p) $$

where $\omega^p_t$ denotes the sequence of words in the t-th phrase. Note that the FBN produces a classification of each phrase; this is converted into a reward by assigning 1 to correct, -1 to wrong, and 0 to not relevant. The final gradient estimate thus takes the following form:

$$ \nabla_\theta L(\theta) \approx - \sum_{p=1}^{P}(r(w^p) - r(\hat{\omega}^p))\nabla_\theta \log p_\theta(w^p) $$
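
To summarize the update, below is a rough sketch of one policy-gradient step with the feedback-augmented, phrase-level reward. The helpers sample_caption, greedy_caption, bleu_reward and fbn_phrase_reward are hypothetical stand-ins for the quantities defined above, and the loop assumes for simplicity that the sampled and greedy captions have the same number of phrases.

    import torch

    def pg_step(model, optimizer, image, feedback, refs, beta=1.0, lambda_f=0.5):
        # Sample a caption w^s (keeping per-phrase log-probs) and decode a greedy baseline w_hat.
        # sample_caption / greedy_caption / bleu_reward / fbn_phrase_reward are hypothetical helpers.
        phrases, log_probs = sample_caption(model, image)     # lists indexed by phrase p
        baseline_phrases, _ = greedy_caption(model, image)

        # Sentence-level reward: quality-weighted sum of BLEU scores against the references.
        r_sent = beta * bleu_reward(phrases, refs)
        r_base = beta * bleu_reward(baseline_phrases, refs)

        loss = 0.0
        for p, log_p in enumerate(log_probs):
            # Phrase reward: sentence reward plus the FBN judgement of this phrase,
            # mapped to +1 (correct), -1 (wrong) or 0 (not relevant).
            r_p = r_sent + lambda_f * fbn_phrase_reward(phrases, feedback, p)
            r_p_hat = r_base + lambda_f * fbn_phrase_reward(baseline_phrases, feedback, p)
            loss = loss - (r_p - r_p_hat) * log_p   # -(r(w^p) - r(w_hat^p)) * log p_theta(w^p)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()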

Experimental Results

The authors used the MS-COCO dataset, with 82K images for training, 2K for validation, and 4K for testing. To collect feedback, they randomly chose 7K images from the training set, as well as all 2K images from the validation set. The word vocabulary has a size of 23,115.

Phrase-based captioning model

Feedback network

RL with Natural Language Feedback

Conclusion

In this paper, a human teacher is enabled to provide feedback to the learning agent in the form of natural language. The authors focused on the problem of image captioning. They proposed a hierarchical phrase-based RNN as the captioning model, which allowed natural integration with human feedback. They also crowd-sourced feedback, and showed how to incorporate it in policy gradient optimization.

References