Unsupervised Neural Machine Translation


Introduction

The paper presents an unsupervised approach to machine translation that uses only monolingual corpora, without any alignment between sentences or documents. A monolingual corpus is a text corpus made up of one language only. This contrasts with the usual translation approach, which uses parallel corpora, where the two corpora are direct translations of each other and the translations are aligned by words or sentences. The problem is important because many language pairs, e.g. German-Russian, lack parallel corpora.

The general approach of the methodology is to:

  1. Use monolingual corpora in the source and target languages to learn source and target word embeddings.
  2. Align the two sets of word embeddings in the same latent space.

Then iteratively perform:

  1. Train an auto-encoder to reconstruct noisy versions of sentence embeddings for both source and target language, where the encoder is shared and the decoder is different in each language.
  2. Tune the decoder in each language by back-translating between the source and target language.

Background

Word Embedding Alignment

The paper uses word2vec [Mikolov, 2013] to convert each monolingual corpus to vector embeddings. These embeddings have been shown to capture contextual and syntactic features independent of language, so in theory there could exist a linear map that takes the embeddings of language L1 to those of language L2.

Figure 2 shows the word embeddings in English and French (a & b), and (c) shows the aligned word embeddings after some linear transformation.

  • insert Figure2

The paper uses the methodology proposed by [Artetxe, 2017] to perform cross-lingual embedding alignment in an unsupervised manner, without parallel data. Without going into the details, the general approach is to start from a seed dictionary of numeral pairings (e.g. 1-1, 2-2, etc.) and iteratively learn the mapping between the two languages' embeddings, while concurrently improving the dictionary at each iteration.
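As a rough, hypothetical sketch of the core idea (not the exact method of [Artetxe, 2017]), the Python/NumPy snippet below solves the orthogonal Procrustes problem for a seed dictionary and then re-induces a larger dictionary by nearest-neighbour search, which is the kind of step such a self-learning loop repeats; all data here is random toy data.

  import numpy as np

  def learn_mapping(X, Z):
      # Orthogonal Procrustes: find an orthogonal W minimizing ||XW - Z||.
      # X, Z are (n_pairs, dim) source/target embeddings of seed dictionary pairs.
      U, _, Vt = np.linalg.svd(X.T @ Z)
      return U @ Vt

  def induce_dictionary(X_all, Z_all, W):
      # Re-induce a dictionary by nearest-neighbour search in the mapped space
      # (cosine similarity after length normalisation).
      Xm = X_all @ W
      Xm /= np.linalg.norm(Xm, axis=1, keepdims=True)
      Zn = Z_all / np.linalg.norm(Z_all, axis=1, keepdims=True)
      return (Xm @ Zn.T).argmax(axis=1)  # closest target word for each source word

  # Toy usage with random "embeddings" and a numeral-style seed of 5 pairs (1-1, 2-2, ...).
  rng = np.random.default_rng(0)
  X_all, Z_all = rng.normal(size=(1000, 300)), rng.normal(size=(1000, 300))
  src_idx, tgt_idx = np.arange(5), np.arange(5)
  for _ in range(3):  # simplified self-learning loop
      W = learn_mapping(X_all[src_idx], Z_all[tgt_idx])
      tgt_idx = induce_dictionary(X_all, Z_all, W)  # refresh/grow the dictionary
      src_idx = np.arange(len(tgt_idx))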

Methodology

The corpus data is first preprocessed in a standard way to tokenize and truecase the words. The words are then converted to 300-dimensional word embeddings using word2vec and aligned across languages using the method proposed by [Artetxe, 2017]. This alignment method is also used as a baseline to evaluate the model, as discussed later in Results.

The translation model uses a standard encoder-decoder architecture with attention. The encoder is a 2-layer bidirectional RNN, and the decoder is a 2-layer RNN. All RNNs use GRU cells with 600 hidden units. The encoder is shared by the source and target languages, while each language has its own decoder.

  • insert Figure1*
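The authors' implementation is not reproduced in this summary; as an illustrative skeleton only (PyTorch, with assumed module and parameter names, placeholder vocabulary sizes, and the attention mechanism omitted for brevity), the architecture described above might be organized as follows.

  import torch.nn as nn

  class SharedEncoder(nn.Module):
      # 2-layer bidirectional GRU encoder shared by both languages; the intended
      # inputs are the fixed, pre-aligned cross-lingual word embeddings (300 dims).
      def __init__(self, embed_dim=300, hidden=600, layers=2):
          super().__init__()
          self.rnn = nn.GRU(embed_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)

      def forward(self, embedded):              # embedded: (batch, seq, embed_dim)
          outputs, state = self.rnn(embedded)   # outputs: (batch, seq, 2 * hidden)
          return outputs, state

  class LanguageDecoder(nn.Module):
      # Per-language 2-layer GRU decoder; the real model also attends over the
      # encoder outputs at every step, which is omitted here.
      def __init__(self, vocab_size, embed_dim=300, hidden=600, layers=2):
          super().__init__()
          self.embed = nn.Embedding(vocab_size, embed_dim)
          self.rnn = nn.GRU(embed_dim, hidden, num_layers=layers, batch_first=True)
          self.out = nn.Linear(hidden, vocab_size)

      def forward(self, prev_tokens, state):    # prev_tokens: (batch, seq) token ids
          outputs, state = self.rnn(self.embed(prev_tokens), state)
          return self.out(outputs), state       # logits over the vocabulary

  # One shared encoder, one decoder per language (vocabulary sizes are placeholders);
  # the bidirectional encoder state would need to be projected before feeding a decoder.
  encoder = SharedEncoder()
  decoders = {"L1": LanguageDecoder(vocab_size=50000),
              "L2": LanguageDecoder(vocab_size=50000)}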

The translation model iteratively improves the encoder and decoders by performing two tasks: denoising and back-translation.

Denoising

Random noise is added to the input sentences so that the model learns some of the internal structure of the languages. Without noise, the model would simply learn to copy the input word by word. Noise also forces the shared encoder to compose the embeddings of both languages in a language-independent fashion, which can then be decoded by the language-dependent decoders.

Denoising trains the system to reconstruct the original sentence from a noisy version of it in the same language. In mathematical form, if [math]\displaystyle{ x }[/math] is a sentence in language L1:

  1. Construct [math]\displaystyle{ C(x) }[/math], noisy version of [math]\displaystyle{ x }[/math],
  2. Input [math]\displaystyle{ C(x) }[/math] into the current iteration of the shared encoder and use decoder for L1 to get reconstructed [math]\displaystyle{ \hat{x} }[/math].

The training objective is to minimize the cross entropy loss between [math]\displaystyle{ {x} }[/math] and [math]\displaystyle{ \hat{x} }[/math].

The proposed noise function performs [math]\displaystyle{ N/2 }[/math] random swaps of contiguous words, where [math]\displaystyle{ N }[/math] is the number of words in the sentence.
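A minimal toy implementation of such a corruption function (my own sketch of the swapping scheme just described, not the authors' code) could be:

  import random

  def add_noise(sentence):
      # Perform N//2 random swaps of adjacent words, where N is the sentence length.
      words = sentence.split()
      for _ in range(len(words) // 2):
          i = random.randrange(len(words) - 1)
          words[i], words[i + 1] = words[i + 1], words[i]
      return " ".join(words)

  print(add_noise("the cat sat on the mat"))  # e.g. "cat the sat on mat the"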

Back-Translation

With only denoising, the system has no objective that improves the actual translation. Back-translation works by using the decoder of the target language to create a translation, then encoding this translation and decoding it again with the source-language decoder to reconstruct the original sentence. In mathematical form, if [math]\displaystyle{ C(x) }[/math] is a noisy version of sentence [math]\displaystyle{ x }[/math] in language L1:

  1. Input [math]\displaystyle{ C(x) }[/math] into the current iteration of the shared encoder and the decoder for L2 to construct translation [math]\displaystyle{ y }[/math] in L2,
  2. Construct [math]\displaystyle{ C(y) }[/math], noisy version of translation [math]\displaystyle{ y }[/math],
  3. Input [math]\displaystyle{ C(y) }[/math] into the current iteration of shared encoder and the decoder in L1 to reconstruct [math]\displaystyle{ \hat{x} }[/math] in L1.

The training objective is to minimize the cross entropy loss between [math]\displaystyle{ {x} }[/math] and [math]\displaystyle{ \hat{x} }[/math].
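Schematically, one back-translation update for a batch of L1 sentences might look like the sketch below. The encoder, decoder, noise function, and greedy translator are hypothetical callables standing in for the components described above, with heavily simplified signatures (a real decoder runs step by step with attention over the encoder outputs).

  import torch
  import torch.nn.functional as F

  def backtranslation_loss(x_l1, encode, decode_l1, translate_to_l2, noise):
      # 1. Translate the (noised) batch L1 -> L2 with the current model in
      #    inference mode, without propagating gradients through the translation.
      with torch.no_grad():
          y_l2 = translate_to_l2(noise(x_l1))
      # 2. Corrupt the synthetic translation, encode it with the shared encoder,
      #    and decode with the L1 decoder to reconstruct the original sentence.
      logits = decode_l1(encode(noise(y_l2)))        # (batch, seq, vocab) scores
      # 3. Cross-entropy between the reconstruction and the original L1 batch.
      return F.cross_entropy(logits.transpose(1, 2), x_l1)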

Training

Training alternates these two objectives from mini-batch to mini-batch. Each iteration performs one mini-batch of denoising for L1, another for L2, one mini-batch of back-translation from L1 to L2, and another from L2 to L1. The procedure is repeated until convergence. Greedy decoding is used at training time for back-translation, while actual inference at test time uses beam search with a beam size of 12.
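A schematic version of this alternating schedule could look like the loop below, where denoising_loss and backtranslation_loss are placeholder callables for the two objectives described above (their signatures are invented for illustration, not taken from the paper's code):

  def train(optimizer, batches_l1, batches_l2, denoising_loss, backtranslation_loss,
            n_iterations):
      # Alternate the four objectives, one mini-batch update each, per iteration.
      for _ in range(n_iterations):
          for make_loss in (
              lambda: denoising_loss(next(batches_l1), lang="L1"),
              lambda: denoising_loss(next(batches_l2), lang="L2"),
              lambda: backtranslation_loss(next(batches_l1), src="L1", tgt="L2"),
              lambda: backtranslation_loss(next(batches_l2), src="L2", tgt="L1"),
          ):
              optimizer.zero_grad()
              make_loss().backward()   # one objective per mini-batch
              optimizer.step()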

Optimizer choice and other hyperparameters can be found in the paper.

Experiments and Results

The model is evaluated using the Bilingual Evaluation Understudy (BLEU) score, which is typically used to evaluate the quality of a translation against a reference (ground-truth) translation.
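For reference, BLEU combines a brevity penalty with the geometric mean of modified n-gram precisions (usually up to 4-grams):

[math]\displaystyle{ \mathrm{BLEU} = \mathrm{BP} \cdot \exp\left( \sum_{n=1}^{N} w_n \log p_n \right), \qquad \mathrm{BP} = \min\left(1,\ e^{\,1 - r/c}\right) }[/math]

where [math]\displaystyle{ p_n }[/math] is the modified n-gram precision, [math]\displaystyle{ w_n = 1/N }[/math] are uniform weights, [math]\displaystyle{ c }[/math] is the candidate translation length, and [math]\displaystyle{ r }[/math] is the reference length.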

The paper runs the translation model under 3 different settings to compare the performance (Table 1):

Unsupervised

The model only has access to monolingual corpora, using the News Crawl corpus with articles from 2007 to 2013. The baseline for unsupervised is the method proposed by [Artetxe, 2017], which was the unsupervised word vector alignment method discussed in the Background section.

The paper also adds each component piece-wise during evaluation to test the impact each piece has on the final score. The authors additionally experiment with Byte-Pair Encoding (BPE) [Sennrich, 2016], where translation is done over sub-words instead of words. BPE is often used to improve the translation of rare words.

As shown in Table 1, the unsupervised results are strong compared to the word-by-word baseline, with improvements between 40% and 140%. The results also show that back-translation is essential. Denoising alone does not show a big improvement; however, it is required for back-translation, because otherwise back-translation would operate on nonsensical sentences.

For the BPE experiment, the results show that it helps for some language pairs but hurts others: while BPE improved the translation of some rare words, it increased the error rate on other words.
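As a rough illustration of how BPE builds its sub-word vocabulary (a toy re-implementation of the greedy merge loop, not the preprocessing actually used in the paper; the example word frequencies are made up):

  from collections import Counter

  def learn_bpe(word_freqs, num_merges):
      # Greedily merge the most frequent adjacent symbol pair, num_merges times.
      # word_freqs maps words to corpus frequencies; words start as character tuples.
      vocab = {tuple(word): freq for word, freq in word_freqs.items()}
      merges = []
      for _ in range(num_merges):
          pairs = Counter()
          for symbols, freq in vocab.items():
              for a, b in zip(symbols, symbols[1:]):
                  pairs[(a, b)] += freq
          if not pairs:
              break
          best = max(pairs, key=pairs.get)
          merges.append(best)
          # Rewrite every word, replacing occurrences of the best pair with one symbol.
          new_vocab = {}
          for symbols, freq in vocab.items():
              out, i = [], 0
              while i < len(symbols):
                  if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                      out.append(symbols[i] + symbols[i + 1])
                      i += 2
                  else:
                      out.append(symbols[i])
                      i += 1
              new_vocab[tuple(out)] = freq
          vocab = new_vocab
      return merges

  print(learn_bpe({"lower": 5, "lowest": 3, "newer": 6, "wider": 2}, num_merges=4))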

Semi-supervised

Since there is often a small amount of parallel data available, though not enough to train a full neural machine translation system, the authors test a semi-supervised setting that uses the same monolingual data as the unsupervised setting together with either 10,000 or 100,000 random sentence pairs from the News Commentary parallel corpus. The supervision is included during the back-translation stage, where the model is also trained to directly predict the sentences in the parallel corpus.

Table 1 shows that the model can greatly benefit from the addition of a small parallel corpus to the monolingual corpora. It is surprising that the semi-supervised system in row 6 outperforms the supervised system in row 7; one possible explanation is that both the semi-supervised training set and the test set belong to the news domain, whereas the supervised training set covers all domains.


Supervised

This setting provides an upper bound to the unsupervised proposed system. The data used was the combination of all parallel corpora provided at WMT 2014.

The Comparable NMT was trained using the same proposed model, except that it does not use monolingual corpora and was consequently trained without denoising and back-translation. Under the supervised setting, the proposed model does much worse than the state-of-the-art NMT in row 10, which suggests that the additional constraints needed to enable unsupervised learning also limit the potential performance.

  • Insert Table1*

Qualitative Analysis

  • Insert Table2*

Table 2 shows four examples of French-to-English translations. Examples 1 and 2 show that the model is able to handle structural differences between the languages (e.g. it correctly translates "l’aeroport international de Los Angeles" as "Los Angeles International Airport") and that it is capable of producing high-quality translations of longer and more complex sentences. However, in Examples 3 and 4, the system fails to translate the months and numbers correctly and has difficulty with unusual sentence structures.

Conclusions and Future Work

Critique


Other Sources

References

  1. [Mikolov, 2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. "Distributed Representations of Words and Phrases and their Compositionality."
  2. [Artetxe, 2017] Mikel Artetxe, Gorka Labaka, and Eneko Agirre. "Learning Bilingual Word Embeddings with (Almost) No Bilingual Data."
  3. [Gouws, 2016] Stephan Gouws, Yoshua Bengio, and Greg Corrado. "BilBOWA: Fast Bilingual Distributed Representations without Word Alignments."
  4. [Sennrich, 2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. "Neural Machine Translation of Rare Words with Subword Units."