# Introduction

There have been many recent advances in neural generative models for low-resolution pixel-based images. Humans, however, do not see the world as a grid of pixels; we more typically communicate drawings of the things we see as a series of pen strokes that represent components of objects. These pen strokes are similar to the way vector-based images store data. This paper proposes a new method for creating conditional and unconditional generative models of these kinds of vector sketch drawings based on recurrent neural networks (RNNs). The paper explores many applications of these models, especially creative applications, and makes available a unique dataset of vector images.

# Related Work

Previous work related to sketch drawing generation includes methods that focussed primarily on converting input photographs into equivalent vector line drawings. Image-generating models using neural networks also exist but have focussed more on generation of pixel-based imagery. Some recent work has focussed on handwritten character generation using RNNs and Mixture Density Networks to generate continuous data points. This work has been extended more recently to conditionally and unconditionally generate handwritten vectorized Kanji characters by modelling them as a series of pen strokes. Furthermore, this paper builds on work that employed Sequence-to-Sequence models with Variational Autoencoders to model English sentences in a latent vector space.

One of the limiting factors for creating models that operate on vector datasets has been the dearth of publicly available data. Previously available datasets include: Sketch, a set of 20K vector drawings; Sketchy, a set of 70K vector drawings; and ShadowDraw, a set of 30K raster images with extracted vector drawings.

# Methodology

### Dataset

The “QuickDraw” dataset used in this research was assembled from 75K user drawings extracted from the game “Quick, Draw!”, where users drew objects from one of hundreds of classes in 20 seconds or less. The dataset is split into 70K training samples and 2.5K validation and test samples each, and represents each sketch as a set of “pen stroke actions”. Each action is provided as a vector in the form $(\Delta x, \Delta y, p_{1}, p_{2}, p_{3})$. For each vector, $\Delta x$ and $\Delta y$ give the movement of the pen from the previous point, with the initial location being the origin. The last three vector elements are a one-hot representation of pen states: $p_{1}$ indicates that the pen is down and a line should be drawn between the current point and the next point, $p_{2}$ indicates that the pen is up and no line should be drawn between the current point and the next point, and $p_{3}$ indicates that the drawing is finished and the current point and all subsequent points should not be drawn.
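As an illustration, this stroke-5 format can be sketched in a few lines of Python. The drawing below is a hypothetical toy example, not taken from the released dataset:

```python
import numpy as np

# Toy sketch in stroke-5 format: (dx, dy, p1, p2, p3).
points = [
    (0, 5, 1, 0, 0),   # pen down: draw up 5 units from the origin
    (3, 0, 0, 1, 0),   # pen up: move right without drawing
    (0, -5, 1, 0, 0),  # pen down again: draw down 5 units
    (0, 0, 0, 0, 1),   # end of sketch
]
strokes = np.array(points, dtype=np.float32)

# The pen state is one-hot: exactly one of p1, p2, p3 is set per step.
assert (strokes[:, 2:].sum(axis=1) == 1).all()

# Absolute positions can be recovered by cumulatively summing the offsets.
absolute = np.cumsum(strokes[:, :2], axis=0)
print(absolute[-1])  # final pen position relative to the origin
```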

### Sketch-RNN

The model is a Sequence-to-Sequence Variational Autoencoder (VAE). The encoder model is a symmetric and parallel set of two RNNs that individually process the sketch drawings in forward and reverse order, respectively. The hidden state produced by each encoder model is then concatenated into a single hidden state $h$.

The concatenated hidden state $h$ is then projected into two vectors $\mu$ and $\hat{\sigma}$ each of size $N_{z}$ using a fully connected layer. $\hat{\sigma}$ is then converted into a non-negative standard deviation parameter $\sigma$ using an exponential operator. These two parameters $\mu$ and $\sigma$ are then used along with an IID Gaussian vector distributed as $\mathcal{N}(0, I)$ of size $N_{z}$ to construct a random vector $z \in ℝ^{N_{z}}$, similar to the method used for VAE: \begin{align} \mu = W_{\mu}h + b_{\mu}\textrm{, }\hat{\sigma} = W_{\sigma}h + b_{\sigma}\textrm{, }\sigma = \exp\bigg(\frac{\hat{\sigma}}{2}\bigg)\textrm{, }z = \mu + \sigma \odot \mathcal{N}(0,I) \end{align}
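This reparameterization step can be sketched as follows, using toy dimensions and randomly initialized weights in place of the learned projections:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the model's actual sizes are much larger.
h_dim, N_z = 8, 4
h = rng.normal(size=h_dim)  # concatenated encoder hidden state

# Hypothetical projection weights; in the model these are learned.
W_mu, b_mu = rng.normal(size=(N_z, h_dim)), np.zeros(N_z)
W_sigma, b_sigma = rng.normal(size=(N_z, h_dim)), np.zeros(N_z)

mu = W_mu @ h + b_mu
sigma_hat = W_sigma @ h + b_sigma
sigma = np.exp(sigma_hat / 2)   # exponential keeps sigma non-negative

# Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
eps = rng.normal(size=N_z)
z = mu + sigma * eps

assert z.shape == (N_z,)
assert (sigma > 0).all()
```

Sampling `eps` separately and combining it deterministically with `mu` and `sigma` is what keeps the sampling step differentiable with respect to the encoder weights.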

The decoder model is another RNN that samples output sketches from the latent vector $z$. The initial hidden states of each recurrent neuron are determined using $[h_{0}, c_{0}] = \tanh(W_{z}z + b_{z})$. Each step of the decoder RNN accepts the previous point $S_{i-1}$ and the latent vector $z$ as concatenated input. The initial point given is the origin with pen state down. The output at each step is the set of parameters for a probability distribution of the next point $S_{i}$. Outputs $\Delta x$ and $\Delta y$ are modelled using a Gaussian Mixture Model (GMM) with M normal distributions, and output pen states $(q_{1}, q_{2}, q_{3})$ are modelled as a categorical distribution with one-hot encoding. \begin{align} P(\Delta x, \Delta y) = \sum_{j=1}^{M}\Pi_{j}\mathcal{N}(\Delta x, \Delta y | \mu_{x, j}, \mu_{y, j}, \sigma_{x, j}, \sigma_{y, j}, \rho_{xy, j})\textrm{, where }\sum_{j=1}^{M}\Pi_{j} = 1 \end{align}

For each of the M distributions in the GMM, parameters $\mu$ and $\sigma$ are output for both the x and y locations signifying the mean location of the next point and the standard deviation, respectively. Also output from each model is parameter $\rho_{xy}$ signifying correlation of each bivariate normal distribution. An additional vector $\Pi$ is output giving the mixture weights for the GMM. The output $S_{i}$ is determined from each of the mixture models using softmax sampling from these distributions.
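Sampling a single point from these output distributions might look like the following sketch, with hand-picked toy parameters standing in for the decoder's actual outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3  # number of mixture components

# Hypothetical decoder outputs for one time step.
pi = np.array([0.5, 0.3, 0.2])   # mixture weights, sum to 1
mu_x, mu_y = rng.normal(size=M), rng.normal(size=M)
sigma_x = np.exp(rng.normal(size=M))  # standard deviations are positive
sigma_y = np.exp(rng.normal(size=M))
rho = np.tanh(rng.normal(size=M))     # correlations lie in (-1, 1)

# Pick a mixture component, then sample (dx, dy) from that
# bivariate normal distribution.
j = rng.choice(M, p=pi)
cov = np.array([
    [sigma_x[j] ** 2, rho[j] * sigma_x[j] * sigma_y[j]],
    [rho[j] * sigma_x[j] * sigma_y[j], sigma_y[j] ** 2],
])
dx, dy = rng.multivariate_normal([mu_x[j], mu_y[j]], cov)

# Pen state: sample from the categorical distribution (q1, q2, q3).
q = np.array([0.8, 0.15, 0.05])
pen = rng.choice(3, p=q)
print(dx, dy, pen)
```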

One of the key difficulties in training this model is the highly imbalanced class distribution of pen states. In particular, the state that signifies a drawing is complete appears only once per sketch and is difficult to incorporate into the model. In order to have the model stop drawing, the authors introduce a hyperparameter that limits the number of points per drawing to no more than $N_{max}$, after which all output states from the model are set to (0, 0, 0, 0, 1) to force the drawing to stop.

To sample from the model, the parameters required by the GMM and categorical distributions are generated at each time step and the model is sampled until a “stop drawing” state appears or the time state reaches time $N_{max}$. The authors also introduce a “temperature” parameter $\tau$ that controls the randomness of the drawings by modifying the pen states, model standard deviations, and mixture weights as follows:

\begin{align} \hat{q}_{k} \rightarrow \frac{\hat{q}_{k}}{\tau}\textrm{, }\hat{\Pi}_{k} \rightarrow \frac{\hat{\Pi}_{k}}{\tau}\textrm{, }\sigma^{2}_{x} \rightarrow \sigma^{2}_{x}\tau\textrm{, }\sigma^{2}_{y} \rightarrow \sigma^{2}_{y}\tau \end{align}

This parameter $\tau$ lies in the range (0, 1]. As the parameter approaches 0, the model becomes more deterministic and always produces the point locations with the maximum likelihood for a given timestep.
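The effect of $\tau$ on the pen-state distribution can be illustrated with a small softmax sketch; the logits here are hypothetical:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical pen-state logits from the decoder at one time step.
q_hat = np.array([2.0, 0.5, -1.0])

# Dividing the logits by tau sharpens the distribution as tau -> 0,
# so sampling becomes nearly deterministic (always the argmax); the
# variances sigma^2 * tau shrink toward the means in the same way.
for tau in (0.9, 0.2):
    q = softmax(q_hat / tau)
    print(tau, q.round(3))
```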

### Unconditional Generation

The authors also explored unconditional generation of sketch drawings by training only the decoder RNN module. To do this, the initial hidden states of the RNN were set to 0, and only the previous points of the drawing are used as input, without any conditional latent variable $z$. Different sketches are sampled from the network by varying only the temperature parameter $\tau$ between 0.2 and 0.9.

### Training

The training procedure follows the same approach as training for a VAE and uses a loss function that consists of the sum of the Reconstruction Loss $L_{R}$ and the KL Divergence Loss $L_{KL}$. The reconstruction loss term is composed of two terms: $L_{s}$, which tries to maximize the log-likelihood of the generated probability distribution explaining the training data $S$, and $L_{p}$, which is the log loss of the pen state terms. \begin{align} L_{s} = -\frac{1}{N_{max}}\sum_{i=1}^{N_{S}}\log\bigg(\sum_{j=1}^{M}\Pi_{j,i}\mathcal{N}(\Delta x_{i},\Delta y_{i} | \mu_{x,j,i},\mu_{y,j,i},\sigma_{x,j,i},\sigma_{y,j,i},\rho_{xy,j,i})\bigg) \end{align} \begin{align} L_{p} = -\frac{1}{N_{max}}\sum_{i=1}^{N_{max}} \sum_{k=1}^{3}p_{k,i}\log(q_{k,i}) \end{align} \begin{align} L_{R} = L_{s} + L_{p} \end{align}
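A toy computation of the pen-state term $L_{p}$, assuming one-hot targets $p$ and made-up model probabilities $q$ over $N_{max}$ steps:

```python
import numpy as np

N_max = 4
p = np.array([  # ground-truth one-hot pen states, one row per step
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
], dtype=np.float32)
q = np.array([  # model probabilities (each row sums to 1)
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],
])

# Cross-entropy of the predicted pen states, averaged over N_max steps.
L_p = -(p * np.log(q)).sum() / N_max
print(L_p)
```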

The KL divergence loss $L_{KL}$ measures the difference between the distribution of the latent vector $z$ and an IID Gaussian distribution with zero mean and unit variance. This term, normalized by the number of dimensions $N_{z}$, is calculated as: \begin{align} L_{KL} = -\frac{1}{2N_{z}}\big(1 + \hat{\sigma} - \mu^{2} - \exp(\hat{\sigma})\big) \end{align}
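The KL term can be computed directly from $\mu$ and $\hat{\sigma}$; here is a toy sketch with made-up values:

```python
import numpy as np

N_z = 4
mu = np.array([0.5, -0.2, 0.1, 0.0])          # latent means
sigma_hat = np.array([-0.1, 0.2, 0.0, -0.3])  # log-variances

# Closed-form KL divergence against N(0, I), normalized by N_z.
L_KL = -(1.0 / (2 * N_z)) * np.sum(1 + sigma_hat - mu**2 - np.exp(sigma_hat))

assert L_KL >= 0  # KL divergence is non-negative
print(L_KL)
```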

The loss for the entire model is thus the weighted sum: \begin{align} Loss = L_{R} + w_{KL}L_{KL} \end{align}

The weight parameter $w_{KL}$ trades off reconstruction fidelity against how closely the latent distribution matches the prior: as $w_{KL} \rightarrow 0$, the model loses the ability to enforce a prior over the latent space and assumes the form of a pure autoencoder.

# Source

Ha, D., & Eck, D. A neural representation of sketch drawings. In Proc. International Conference on Learning Representations (2018).