Question Answering with Subgraph Embeddings


Introduction

Teaching machines to answer questions automatically in natural language has been a long-standing goal in AI. There has been a rise in large-scale structured knowledge bases (KBs), such as Freebase [3], to tackle the problem known as open-domain question answering (or open QA). However, the scale of these KBs and the difficulty machines have in interpreting natural language still make this problem challenging.

Open QA techniques can be classified into two main categories:

  • Information retrieval based: retrieve a broad set of candidate answers by first querying the API of the KB, then narrow down the answer using heuristics [8,12,14].
  • Semantic parsing based: focus on the correct interpretation of the query. Querying the KB with the interpreted question should return the correct answer [1,9,2,7].

Both of these approaches require non-negligible human intervention (hand-crafted lexicons, grammars, and KB schemas) to be effective.

[5] proposed a vectorial feature representation model for this problem. The goal of this paper is to improve upon the model of [5], specifically with the following contributions:

  • A more sophisticated inference procedure that is more efficient and can consider longer paths.
  • A richer representation of the answers which encodes the question-answer path and the surrounding subgraph of the KB.

Task Definition

The motivation is to provide a system for open QA that can be trained as long as it has access to:

  • A training set of questions paired with answers.
  • A KB providing a structure among answers.

WebQuestions [1] was used as the evaluation benchmark. Since WebQuestions contains only a few thousand samples, it was not possible to train the system on this dataset alone. The following data sources were used for training:

  • WebQuestions: a dataset built using Freebase as the KB, containing 5,810 question-answer pairs. It was created by crawling questions through the Google Suggest API and then obtaining answers using Amazon Mechanical Turk (Turkers were only allowed to use Freebase as the querying tool).
  • Freebase: a huge database of general facts organized as triplets (subject, type1.type2.predicate, object). The form of the Freebase data does not correspond to a structure found in natural language, so the triplets were converted into questions of the form "What is the predicate of the type2 subject?" (see the sketch after this list). Note that all questions generated from Freebase share this fixed format, which is not realistic natural language.
  • ClueWeb Extractions: the team also used ClueWeb extractions as per [1] and [10]. ClueWeb has the format (subject, "text string", object), and it was ensured that both the subject and the object were linked to Freebase. These triples were also converted into questions using simple patterns and Freebase types.
  • Paraphrases: the automatically generated questions have a rigid format and semi-automatic wording, which does not provide a satisfactory model of natural language. To overcome this, the team supplemented their data with paraphrases collected from WikiAnswers. Users on WikiAnswers can tag sentences as rephrasings of each other: [6] harvested 2M distinct questions from WikiAnswers, which were grouped into 350k paraphrase clusters.
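
As a rough illustration of the triplet-to-question conversion described in the Freebase item above, the sketch below applies the fixed "What is the predicate of the type2 subject?" template. The function name and the example triplet are hypothetical assumptions for illustration, not code or data from the paper.

<syntaxhighlight lang="python">
# Minimal sketch of the fixed Freebase triplet-to-question template (hypothetical helper).
def triplet_to_question(subject, predicate_path, obj):
    """Convert a (subject, type1.type2.predicate, object) triplet into a question-answer pair."""
    type1, type2, predicate = predicate_path.split(".")
    # Fixed template: "What is the <predicate> of the <type2> <subject>?"
    question = "what is the {} of the {} {}?".format(predicate, type2, subject)
    return question, obj

# Hypothetical example triplet:
q, a = triplet_to_question("barack_obama", "people.person.place_of_birth", "honolulu")
# q -> "what is the place_of_birth of the person barack_obama?", a -> "honolulu"
</syntaxhighlight>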

Table 2 shows some example sentences from each dataset category.

Embedding Questions and Answers

We wish to train our model such that representations of questions and their corresponding answers are close to each other in the joint embedding space. Let q denote a question and a denote an answer. Learning embeddings is achieved by learning a score function S(q, a) so that S generates a high score if a is the correct answer to q, and a low score otherwise.

[math]\displaystyle{ S(q, a) = f(q)^Tg(a), }[/math]
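
A minimal sketch of this scoring function follows, assuming f(q) = Wφ(q) and g(a) = Wψ(a) are linear embeddings of sparse indicator vectors over a joint dictionary of words and KB constituents; the dictionary size, embedding dimension, and initialization below are illustrative assumptions, not the paper's exact settings.

<syntaxhighlight lang="python">
import numpy as np

# Sketch of S(q, a) = f(q)^T g(a) with f(q) = W phi(q) and g(a) = W psi(a).
# n_symbols and embedding_dim are assumed values for illustration only.
n_symbols = 30000      # words + Freebase entities/relations in the joint dictionary (assumed)
embedding_dim = 64     # dimension of the joint embedding space (assumed)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(embedding_dim, n_symbols))  # shared embedding matrix

def score(phi_q, psi_a):
    """Return S(q, a): the dot product of the embedded question and embedded answer."""
    f_q = W @ phi_q          # f(q): embed the question's sparse bag-of-words vector
    g_a = W @ psi_a          # g(a): embed the candidate answer's sparse feature vector
    return float(f_q @ g_a)  # a high score means a is likely the correct answer to q

# Usage with random sparse indicator vectors (purely illustrative):
phi_q = np.zeros(n_symbols); phi_q[[10, 42, 99]] = 1.0    # words appearing in the question
psi_a = np.zeros(n_symbols); psi_a[[12000, 15000]] = 1.0  # KB constituents of the answer
print(score(phi_q, psi_a))
</syntaxhighlight>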

Representing Candidate Answers

Training and Loss Function

Multitask Training of Embeddings

Inference

Experiments

Conclusion