Dense Passage Retrieval for Open-Domain Question Answering


Presented by

Nicole Yan

1. Introduction

Open-domain question answering is the task of answering questions using a large collection of documents. Modern open-domain QA systems typically use a two-stage framework: (1) a retriever that selects a small subset of documents likely to contain the answer, and (2) a reader that reads the retrieved subset and extracts answer spans. The first stage is usually handled by bag-of-words models, which score documents by counting overlapping words and their frequencies. A bag-of-words method that has been standard for years is BM25, which ranks documents by a weighted function of the query terms appearing in each document. The second stage is usually handled by neural models such as BERT. While the reader has benefited greatly from recent advances in pre-trained language models, the retriever still relies on traditional term-based methods. This paper [1] improves the retrieval stage with a dense retrieval method, and demonstrates that dense retrieval can not only outperform BM25 but also improve end-to-end QA accuracy.
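In DPR, a question encoder E_Q and a passage encoder E_P (two independent BERT networks) map text to dense vectors, and a passage's relevance to a question is the inner product sim(q, p) = E_Q(q) · E_P(p). The sketch below illustrates this scoring with the publicly released DPR checkpoints; the use of the Hugging Face transformers library and the specific checkpoint names are assumptions of this example, not details from the summary above.

import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

# Load the released DPR encoders (checkpoint names assumed from the
# public release; any compatible question/passage encoder pair works).
q_name = "facebook/dpr-question_encoder-single-nq-base"
p_name = "facebook/dpr-ctx_encoder-single-nq-base"
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(q_name)
q_encoder = DPRQuestionEncoder.from_pretrained(q_name)
p_tokenizer = DPRContextEncoderTokenizer.from_pretrained(p_name)
p_encoder = DPRContextEncoder.from_pretrained(p_name)

question = "who wrote on the origin of species"
passages = [
    "On the Origin of Species was written by Charles Darwin in 1859.",
    "BM25 ranks documents by weighted term overlap with the query.",
]

with torch.no_grad():
    # Each encoder maps its input to a single dense vector
    # (the representation of the [CLS] token).
    q_vec = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output   # (1, 768)
    p_vecs = p_encoder(**p_tokenizer(passages, return_tensors="pt",
                                     padding=True, truncation=True)).pooler_output  # (2, 768)

# Relevance is the inner product of question and passage vectors;
# the first (Darwin) passage should receive the higher score.
scores = (q_vec @ p_vecs.T).squeeze(0)
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:8.2f}  {passage}")

Unlike BM25's term matching, this ranking is computed entirely in embedding space, so a passage can score highly even without lexical overlap with the question. At corpus scale, the passage vectors are precomputed and stored in a similarity-search index (the paper uses FAISS), so only the question needs to be encoded at query time.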

2. Background

3. Dense Passage Retriever

3.1 Model Architecture Overview

3.2 Training

4. Experimental Setup

5. Retrieval Performance Evaluation

5.1 Main Results

5.2 Ablation Study on Model Training

5.3 Qualitative Analysis

5.4 Run-time Efficiency

6. Experiments: Question Answering

7. Related Work

8. Conclusion

Critiques

References

[1] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of EMNLP 2020.