Unsupervised Neural Machine Translation



The paper presents an unsupervised method for machine translation that uses only monolingual corpora, with no alignment between sentences or documents. A monolingual corpus is a text corpus made up of one language only. This contrasts with the usual translation approach, which uses parallel corpora: two corpora that are direct translations of each other, with the translations aligned by words or sentences. The problem is important because many language pairs, e.g. German-Russian, lack parallel corpora.

The general approach of the method is to:

  1. Use monolingual corpora in the source and target languages to learn source and target word embeddings.
  2. Align the 2 sets of word embeddings in the same latent space.

Then iteratively perform:

  1. Train an auto-encoder to reconstruct noisy versions of sentences in both the source and target language, where the encoder is shared and each language has its own decoder.
  2. Tune the decoder in each language by back-translating between the source and target language.


Word Embedding Alignment

The paper uses word2vec [Mikolov, 2013] to convert each monolingual corpus to vector embeddings. These embeddings have been shown to capture contextual and syntactic features independently of language, so in theory there could exist a linear map that takes the embeddings of language L1 to those of language L2.

Figure 2 shows the word embeddings in English and French (a & b), and (c) shows the aligned word embeddings after some linear transformation.

  [Figure 2]

The paper uses the methodology proposed by [Artetxe, 2017] to perform cross-lingual embedding alignment in an unsupervised manner, without parallel data. Without going into the details, the general approach is to start from a seed dictionary of numeral pairings (e.g. 1-1, 2-2, etc.) and iteratively learn a mapping between the two languages' embeddings, while concurrently improving the dictionary at each iteration.
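To make this step concrete, the sketch below shows one self-learning iteration in the spirit of [Artetxe, 2017]; it is an illustration, not the authors' implementation. It assumes X and Z are row-normalized word embedding matrices for L1 and L2, and seed is the current dictionary as a list of (L1 index, L2 index) pairs; all names are illustrative.

  import numpy as np

  def self_learning_step(X, Z, seed):
      # Orthogonal Procrustes: find the orthogonal W that minimizes
      # ||X[src] @ W - Z[tgt]|| over the current dictionary pairs.
      src, tgt = map(list, zip(*seed))
      U, _, Vt = np.linalg.svd(X[src].T @ Z[tgt])
      W = U @ Vt
      # Re-induce the dictionary: map every L1 word into the L2 space and
      # pair it with its nearest L2 neighbour (dot product equals cosine
      # here, since rows are normalized and W is orthogonal).
      sims = (X @ W) @ Z.T
      new_seed = [(i, int(j)) for i, j in enumerate(sims.argmax(axis=1))]
      return W, new_seed

Iterating this step until the induced dictionary stops changing yields the final mapping; the cited paper adds further refinements on top of this basic loop.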


The corpus data is first preprocessed in a standard way to tokenize and truecase the words. The words are then converted to 300-dimensional word embeddings using word2vec, and aligned across languages using the method proposed by [Artetxe, 2017]. That alignment method is also used as a baseline to evaluate this model, as discussed later in Results.
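As an illustration, the embedding step could look like the following, using gensim's word2vec implementation (one common choice; the paper does not specify an implementation). The corpus and output paths are hypothetical, and the input is assumed to be one tokenized, truecased sentence per line.

  from gensim.models import Word2Vec
  from gensim.models.word2vec import LineSentence

  model = Word2Vec(
      LineSentence("corpus.l1.tok"),  # hypothetical preprocessed corpus
      vector_size=300,                # embedding dimension (named size in gensim < 4)
      sg=1,                           # skip-gram
      window=5,
      min_count=5,
      workers=4,
  )
  model.wv.save_word2vec_format("embeddings.l1.vec")  # hypothetical output path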

The translation model uses a standard encoder-decoder architecture with attention. The encoder is a 2-layer bidirectional RNN, and the decoder is a 2-layer RNN. All RNNs use GRU cells with 600 hidden units. The encoder is shared by the source and target languages, while each language has its own decoder.

  [Figure 1]
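A minimal PyTorch sketch of the layout described above, with attention and embedding lookups omitted and all names illustrative:

  import torch.nn as nn

  EMB, HIDDEN = 300, 600

  # One shared encoder for both languages: 2-layer bidirectional GRU.
  shared_encoder = nn.GRU(input_size=EMB, hidden_size=HIDDEN,
                          num_layers=2, bidirectional=True, batch_first=True)

  # One decoder per language: 2-layer unidirectional GRU.
  decoders = nn.ModuleDict({
      lang: nn.GRU(input_size=EMB, hidden_size=HIDDEN,
                   num_layers=2, batch_first=True)
      for lang in ("L1", "L2")
  })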

The translation model iteratively improves the encoder and decoder by performing 2 tasks: denoising and back-translation.


Denoising

Random noise is added to the input sentences in order to allow the model to learn some structure of the languages. Without noise, the model would simply learn to copy the input word by word. Noise also lets the shared encoder compose the embeddings of both languages in a language-independent fashion, which can then be decoded by the language-dependent decoders.

Denoising reconstructs a noisy version of a sentence in the same language back to the original. In mathematical form, if [math]x[/math] is a sentence in language L1:

  1. Construct [math]C(x)[/math], noisy version of [math]x[/math],
  2. Input [math]C(x)[/math] into the current iteration of the shared encoder and use the decoder for L1 to get the reconstruction [math]\hat{x}[/math].

The training objective is to minimize the cross entropy loss between [math]{x}[/math] and [math]\hat{x}[/math].
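A hypothetical training step for this objective might look as follows. The model object with encode/decode methods is an assumption for illustration, not an API from the paper; C is the noise function described next, applied here to a whole batch for brevity.

  import torch.nn.functional as F

  def denoising_step(model, x, lang):
      # x: (batch, seq_len) tensor of token ids in language `lang`.
      hidden = model.encode(C(x))               # shared encoder on the noisy input
      logits = model.decode(hidden, lang=lang)  # decoder of the SAME language
      # Token-level cross entropy between the reconstruction and clean x;
      # cross_entropy expects (batch, vocab, seq_len) logits.
      return F.cross_entropy(logits.transpose(1, 2), x)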

The proposed noise function is to perform [math]N/2[/math] random swaps of words that are near each other, where [math]N[/math] is the number of words in the sentence.
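A direct implementation of this noise function on a tokenized sentence (a list of words) could be:

  import random

  def C(x):
      # Return a noisy copy of x with N/2 random swaps of adjacent words.
      x = list(x)
      for _ in range(len(x) // 2):
          i = random.randrange(len(x) - 1)  # pick a random adjacent pair
          x[i], x[i + 1] = x[i + 1], x[i]   # and swap it
      return x

For example, C(["the", "cat", "sat"]) might return ["cat", "the", "sat"].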


Back-translation

With only denoising, the system has no objective that improves the actual translation. Back-translation uses the decoder of the target language to create a translation, then encodes this translation and decodes it again with the source-language decoder to reconstruct the original sentence. In mathematical form, if [math]C(x)[/math] is a noisy version of sentence [math]x[/math] in language L1:

  1. Input [math]C(x)[/math] into the current iteration of the shared encoder and the decoder in L2 to construct translation [math]y[/math] in L2,
  2. Construct [math]C(y)[/math], noisy version of translation [math]y[/math],
  3. Input [math]C(y)[/math] into the current iteration of shared encoder and the decoder in L1 to reconstruct [math]\hat{x}[/math] in L1.

The training objective is to minimize the cross entropy loss between [math]{x}[/math] and [math]\hat{x}[/math].
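A matching sketch, following the same hypothetical conventions as denoising_step above (greedy_decode is an assumed helper that runs a decoder in inference mode):

  import torch
  import torch.nn.functional as F

  def backtranslation_step(model, x, src_lang, tgt_lang):
      # Translate x into the other language; the translation is treated
      # as data, so no gradients flow through this pass.
      with torch.no_grad():
          y = model.greedy_decode(model.encode(C(x)), lang=tgt_lang)
      # Train the system to reconstruct the original x from the noisy
      # translation, using the source-language decoder.
      hidden = model.encode(C(y))
      logits = model.decode(hidden, lang=src_lang)
      return F.cross_entropy(logits.transpose(1, 2), x)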


Training is done by alternating these 2 objectives from mini-batch to mini-batch. Each iteration performs one mini-batch of denoising for L1, another for L2, one mini-batch of back-translation from L1 to L2, and another from L2 to L1. The procedure is repeated until convergence. Greedy decoding is used at training time for back-translation, but actual inference at test time is done using beam search with a beam size of 12.
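Put together, one pass of this alternating schedule could be sketched as follows; the mini-batch iterators and the optimizer are assumed to exist.

  for x1, x2 in zip(batches_l1, batches_l2):  # monolingual mini-batches
      for make_loss in (
          lambda: denoising_step(model, x1, "L1"),              # denoise L1
          lambda: denoising_step(model, x2, "L2"),              # denoise L2
          lambda: backtranslation_step(model, x1, "L1", "L2"),  # L1 -> L2 -> L1
          lambda: backtranslation_step(model, x2, "L2", "L1"),  # L2 -> L1 -> L2
      ):
          optimizer.zero_grad()
          make_loss().backward()  # compute one objective and backprop
          optimizer.step()        # update the shared encoder and decoders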

Optimizer choice and other hyperparameters can be found in the paper.




Other Sources


  1. [Mikolov, 2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. "Distributed representations of words and phrases and their compositionality."
  2. [Artetxe, 2017] Mikel Artetxe, Gorka Labaka, and Eneko Agirre. "Learning bilingual word embeddings with (almost) no bilingual data."
  3. [Gouws, 2016] Stephan Gouws, Yoshua Bengio, and Greg Corrado. "BilBOWA: Fast Bilingual Distributed Representations without Word Alignments."
  4. [Sennrich, 2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. "Neural Machine Translation of Rare Words with Subword Units."