# stat946w18/Unsupervised Machine Translation Using Monolingual Corpora Only

## Introduction

Neural machine translation systems must be trained on large corpora consisting of pairs of pre-translated sentences. This paper proposes an unsupervised neural machine translation system, which can be trained without using any such parallel data.

## Overview of unsupervised translation system

The unsupervised translation system has the following plan:

• Sentences from both the source and target language are mapped to a common latent vector space.
• A de-noising auto-encoder loss encourages the latent space representations of sentences to be insensitive to noise.
• An adversarial loss encourages the latent space representations of source and target sentences to be indistinguishable from each other. The idea is that the latent space representations should reflect the meaning of a sentence, and not the particular language in which it is expressed.
• A reconstruction loss is computed as follows: sample a sentence from one of the languages, and apply the translation model of the previous epoch to translate it to the other language. Then corrupt this translation with noise. The reconstruction loss encourages the model to be able to recover the original sampled sentence from its corrupted translation by passing through the latent vector space. (A minimal sketch of one possible corruption function follows this list.)
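To make the role of noise concrete, here is a minimal sketch of one possible corruption function. The drop probability `p_drop` and the local-shuffle window `k` are illustrative parameters chosen for this sketch, not values taken from the paper.

```python
import random

def corrupt(sentence, p_drop=0.1, k=3):
    """Corrupt a sentence (a list of words) for the de-noising objective.

    Two illustrative noise operations:
      * drop each word independently with probability p_drop,
      * locally shuffle the remaining words so that no word moves
        far from its original position.
    """
    # Word dropout.
    kept = [w for w in sentence if random.random() > p_drop]
    if not kept:  # keep at least one word so the result stays non-empty
        kept = [random.choice(sentence)]
    # Local shuffle: sort by (original index + uniform noise in [0, k]).
    keys = [i + random.uniform(0, k) for i in range(len(kept))]
    return [w for _, w in sorted(zip(keys, kept))]
```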

In what follows I will discuss this plan in more detail.

## Notation

Let $S$ denote the set of words in the source language, and let $T$ denote the set of words in the target language. Let $H \subset \mathbb{R}^{n_H}$ denote the latent vector space. Moreover, let $S'$ and $T'$ denote the sets of finite sequences of words in the source and target language, and let $H'$ denote the set of finite sequences of vectors in the latent space. Similarly, let $\mathcal{Z}$ denote the space of aligned word vectors (introduced in the next section), and let $\mathcal{Z}'$ denote the set of finite sequences of such vectors. For any set $X$, we elide measure-theoretic details and let $\mathcal{P}(X)$ denote the set of probability distributions over $X$.

## Word vector alignment

Conneau et al. (2017) describe an unsupervised method for aligning word vectors across languages. By "alignment", I mean that their method groups vectors corresponding to words with similar meanings close to one another, regardless of the language the words come from. Moreover, if word C is the literal target-language translation of the source-language word B, then -- after alignment -- C's word vector tends to be the closest target-language word vector to the word vector of B. This unsupervised alignment method is crucial to the translation scheme of the current paper. From now on, we denote by $A: S' \cup T' \to \mathcal{Z}'$ the function that maps source- and target-language word sequences to their aligned word vectors.
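To illustrate what alignment provides, the sketch below translates a single word by nearest-neighbour search in the shared embedding space. The dictionaries `src_vecs` and `tgt_vecs`, which map words to their aligned vectors, are assumed to come from the method of Conneau et al. (2017); the function itself is only an illustration, not part of the translation model.

```python
import numpy as np

def translate_word(word, src_vecs, tgt_vecs):
    """Return the target-language word whose aligned vector is closest
    (by cosine similarity) to the aligned vector of `word`.

    src_vecs, tgt_vecs: dicts mapping words to numpy arrays that live
    in the same aligned vector space (assumed to be given).
    """
    z = src_vecs[word]
    z = z / np.linalg.norm(z)
    best_word, best_sim = None, -np.inf
    for w, v in tgt_vecs.items():
        sim = float(np.dot(z, v / np.linalg.norm(v)))
        if sim > best_sim:
            best_word, best_sim = w, sim
    return best_word
```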

## Encoder

The encoder $E$ reads a sequence of word vectors $(z_1,\ldots, z_m) \in \mathcal{Z}'$ and outputs a sequence of hidden states $(h_1,\ldots, h_m) \in H'$ in the latent space. Crucially, because the word vectors of the two languages have been aligned, the same encoder can be applied to both. That is, to map a source sentence $x=(x_1,\ldots, x_M)\in S'$ to the latent space, we compute $E(A(x))$, and to map a target sentence $y=(y_1,\ldots, y_K)\in T'$ to the latent space, we compute $E(A(y))$.

The encoder consists of two LSTMs, one of which reads the word-vector sequence in the forward direction, and one of which reads it in the backward direction. The hidden state sequence is generated by concatenating the hidden states produced by the forward and backward LSTM at each word vector.
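A minimal PyTorch sketch of such an encoder is shown below. The module name, the hidden size `n_h`, and the use of a single `nn.LSTM` with `bidirectional=True` (which concatenates forward and backward states internally) are implementation choices made for this sketch, not details fixed by the paper.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: aligned word vectors -> latent hidden-state sequence."""

    def __init__(self, n_z, n_h):
        super().__init__()
        # One bidirectional LSTM stands in for the forward and backward
        # LSTMs; its output is the concatenation of their hidden states.
        self.lstm = nn.LSTM(input_size=n_z, hidden_size=n_h,
                            batch_first=True, bidirectional=True)

    def forward(self, z):
        # z: (batch, m, n_z), a sequence of aligned word vectors A(x) or A(y).
        h, _ = self.lstm(z)
        return h  # (batch, m, 2 * n_h)
```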

## Decoder

The decoder is a mono-directional LSTM that accepts a sequence of hidden states $h=(h_1,\ldots, h_m) \in H'$ from the latent space together with a language, and outputs a probability distribution over sequences in that language. We have

\begin{align} D: H' \times \{S,T \} \to \mathcal{P}(S') \cup \mathcal{P}(T'). \end{align}

In detail, the decoder is a mono-directional LSTM that makes use of the attention mechanism of Bahdanau et al. (2014). To compute the probability of a given sentence $y=(y_1,\ldots,y_K)$, the LSTM processes the sentence one word at a time, accepting at each step $k$ the aligned word vector of the previous word in the sentence, $A(y_{k-1})$, and a context vector $c_k\in H$ computed from the hidden sequence $h\in H'$. The LSTM is initialized with a special, language-specific start-of-sequence token. Otherwise, the decoder does not depend on the language of the sentence it is producing. The context vector is computed as described by Bahdanau et al. (2014), where $l_{k}$ denotes the hidden state of the LSTM at step $k$, $U,W$ are learnable weight matrices, and $v$ is a learnable weight vector:

\begin{align} c_k&= \sum_{m=1}^M \alpha_{k,m} h_m,\\ \alpha_{k,m}&= \frac{\exp(e_{k,m})}{\sum_{m'=1}^M\exp(e_{k,m'})},\\ e_{k,m} &= v^T \tanh (Wl_{k-1} + U h_m ). \end{align}

By learning $U,W$ and $v$, the decoder can learn which vectors in the sequence $h$ are relevant to computing which words in the output sequence.
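The sketch below implements these three equations for a single decoding step. The tensor shapes, module structure, and parameter names are assumptions made for illustration; only the equations themselves come from the text above.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Additive (Bahdanau-style) attention: computes c_k from l_{k-1} and h."""

    def __init__(self, n_dec, n_h):
        super().__init__()
        self.W = nn.Linear(n_dec, n_h, bias=False)  # applied to l_{k-1}
        self.U = nn.Linear(n_h, n_h, bias=False)    # applied to each h_m
        self.v = nn.Linear(n_h, 1, bias=False)      # the weight vector v

    def forward(self, l_prev, h):
        # l_prev: (batch, n_dec), previous decoder hidden state l_{k-1}
        # h:      (batch, m, n_h), encoder hidden-state sequence
        e = self.v(torch.tanh(self.W(l_prev).unsqueeze(1) + self.U(h)))
        alpha = torch.softmax(e, dim=1)   # attention weights alpha_{k,m}
        c = (alpha * h).sum(dim=1)        # context vector c_k, shape (batch, n_h)
        return c, alpha.squeeze(-1)
```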

## Overview of objective

The objective function is the sum of three terms:

1. The de-noising auto-encoder loss
2. The translation loss
3. The adversarial loss
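As a trivial illustration, the three terms could be combined as a weighted sum; the weighting coefficients below are hypothetical placeholders, not values from the paper.

```python
def total_loss(l_auto, l_trans, l_adv,
               w_auto=1.0, w_trans=1.0, w_adv=1.0):
    """Combine the three loss terms; the weights are illustrative only."""
    return w_auto * l_auto + w_trans * l_trans + w_adv * l_adv
```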