Universal Style Transfer via Feature Transforms
Revision as of 01:27, 24 October 2017
Under construction!
Introduction
When viewing an image, whether a photograph or a painting, two complementary kinds of information are present. First, there is the content of the image, such as the person in a portrait. However, the content does not uniquely define the image: were multiple artists to paint a portrait of the same subject, the results would vary even though the content is identical. The source of this variation is the style of each particular artist. Style transfer between two images therefore leaves the content unaffected and copies only the style. Typically, one image is termed the content (or reference) image, whose style is discarded, and the other is called the style image, whose style, but not content, is copied.
Deep learning techniques have proven effective for style transfer, but previous methods suffer from key limitations: either they are fast but can transfer only a small, fixed set of styles, or they handle arbitrary styles but are inefficient. The presented paper establishes a compromise between these two extremes by using only whitening and colouring transforms to transfer a particular style. No per-style training of the underlying deep network is required.
Related Work
Gatys et al. developed a new method for generating textures from sample images in 2015 [1] and extended their approach to style transfer by 2016 [2]. They proposed the use of a pre-trained convolutional neural network (CNN) to separate the content and style of input images. Having proven successful, a number of improvements quickly followed, reducing computational time, increasing the diversity of transferable styles, and improving the quality of the results. Central to these approaches, and to the present paper, is the use of a CNN.
How Content and Style are Extracted using CNNs
A CNN was chosen due to its ability to extract high-level features from images. These features can be interpreted in two ways: as content and as style. Within layer [math]\displaystyle{ l }[/math] there are [math]\displaystyle{ N_l }[/math] feature maps of size [math]\displaystyle{ M_l }[/math]. For a particular input image, the feature map entries are given by [math]\displaystyle{ F_{i,j}^l }[/math], where [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] locate the output within the layer. Starting with a white-noise image and a reference (content) image, the content can be transferred by minimizing
[math]\displaystyle{ \mathcal{L}_{content} = \frac{1}{2} \sum_{i,j} \left( F_{i,j}^l - P_{i,j}^l \right)^2 }[/math]
where [math]\displaystyle{ P_{i,j}^l }[/math] denotes the feature map of the reference (content) image and [math]\displaystyle{ F_{i,j}^l }[/math] that of the generated image, which is initialized as white noise. Minimizing this loss therefore preserves the content of the reference image. The style is described using a Gram matrix given by
[math]\displaystyle{ G_{i,j}^l = \sum_k F_{i,k}^l F_{j,k}^l }[/math]
and the loss function that describes a difference in style between two images is
[math]\displaystyle{ \mathcal{L}_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left(G_{i,j}^l - A_{i,j}^l \right)^2 }[/math]
where [math]\displaystyle{ G_{i,j}^l }[/math] and [math]\displaystyle{ A_{i,j}^l }[/math] are the Gram matrices of the generated image and the style image, respectively. Three images are therefore required: a style image, a content image, and an initial white-noise image. Iterative optimization is then used to impose the content of one image, and the style of the other, onto the white-noise image. An additional parameter balances the ratio of the two loss functions.
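The two losses above can be sketched in NumPy by treating a layer's activations as an [math]\displaystyle{ N_l \times M_l }[/math] matrix. This is an illustrative sketch, not the authors' implementation; the function names are my own.

```python
import numpy as np

def content_loss(F, P):
    """Squared-error content loss between generated-image features F
    and content-image features P (both N_l x M_l matrices)."""
    return 0.5 * np.sum((F - P) ** 2)

def gram_matrix(F):
    """Gram matrix G_ij = sum_k F_ik * F_jk of an N_l x M_l feature matrix."""
    return F @ F.T

def style_loss(F_gen, F_style):
    """Squared-error loss between the Gram matrices of the generated
    and style images, with the 1/(4 N^2 M^2) normalization."""
    N, M = F_gen.shape
    G = gram_matrix(F_gen)
    A = gram_matrix(F_style)
    return np.sum((G - A) ** 2) / (4.0 * N ** 2 * M ** 2)
```

In the full method these losses are summed over several layers and weighted against each other; the sketch shows a single layer only.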
The 19-layer, ImageNet-trained VGG network was chosen by Gatys et al. VGG-19 is still commonly used in more recent works, as will be shown in the presented paper, although training datasets vary. Such CNNs are typically used in classification problems by passing their output through a series of fully connected layers. For content and style extraction it is the convolutional layers that are required. The method of Gatys et al. is style-independent, since the CNN does not need to be trained for each style image. However, the process of iteratively optimizing the output image is inefficient.
Other Methods
Other methods avoid the inefficiency of iterative optimization by training one or more networks on a set of styles. The network then directly transfers the style from the style image to the content image without solving the iterative optimization problem. V. Dumoulin et al. trained a single network on $N$ styles [3], improving upon previous work where one network was required per style [4]. The stylized output image is generated by simply running a feed-forward pass of the network on the content image. While efficiency is high, the method is no longer able to apply an arbitrary style without retraining.
Methodology
Li et al. have proposed a novel method for generating the stylized image. A CNN is still used as in Gatys et al. to extract content and style. However, the stylized image is not generated through iterative optimization or a feed-forward pass as required by previous methods. Instead, whitening and colour transforms are used.
Image Reconstruction
An auto-encoder network is used to first encode an input image into a set of feature maps, and then decode it back to an image. The encoder network used is VGG-19. This network is responsible for obtaining feature maps (similar to Gatys et al.). The output of each of the first five layers is then fed into a corresponding decoder network, which is a mirrored version of VGG-19. Each decoder network then decodes the feature maps of the $l$th layer, producing an output image. A mechanism for transferring style will be implemented by manipulating the feature maps between the encoder and decoder networks.
First, the auto-encoder network needs to be trained. The following loss function is used
[math]\displaystyle{ \mathcal{L} = || I_{output} - I_{input} ||_2^2 + \lambda || \Phi(I_{output}) - \Phi(I_{input})||_2^2 }[/math]
where $I_{input}$ and $I_{output}$ are the input and output images of the auto-encoder, and $\Phi$ is the VGG encoder. The first term of the loss is the pixel reconstruction loss, while the second term is the feature loss. Recall from "Related Work" that the feature maps correspond to the content of the image. Therefore the second term can also be seen as penalizing content differences introduced by the auto-encoder network. The network was trained using the Microsoft COCO dataset.
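The reconstruction loss can be written compactly as below. This is a minimal sketch assuming images are NumPy arrays and `phi` stands in for the fixed VGG encoder; in the paper, $\Phi$ is the pre-trained VGG-19 up to the layer being reconstructed.

```python
import numpy as np

def reconstruction_loss(I_out, I_in, phi, lam=1.0):
    """Pixel reconstruction loss plus a lambda-weighted feature (content)
    loss. `phi` is any function mapping an image array to a feature array,
    standing in for the fixed VGG encoder."""
    pixel_loss = np.sum((I_out - I_in) ** 2)        # ||I_out - I_in||_2^2
    feature_loss = np.sum((phi(I_out) - phi(I_in)) ** 2)
    return pixel_loss + lam * feature_loss
```

Only the decoders are trained against this loss; the VGG encoder stays fixed, which is what allows the same decoders to be reused for any style later.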
Whitening Transform
Whitening transforms the data so that its covariance matrix becomes the identity. This is done by solving for the eigenvalue and eigenvector matrices of the covariance matrix, then rescaling the data along the eigenvector directions so that all eigenvalues become equal. This is achieved for a feature map from VGG through the following steps.
- The feature map $f_c$ is extracted from a layer of the encoder network. This is the data to be whitened.
- $f_c$ is centered by subtracting its mean vector $m_c$.
- The whitened feature map is then given by $\hat{f}_c = E_c D_c^{-1/2} E_c^T f_c$, where $E_c$ is the matrix of orthogonal eigenvectors and $D_c$ is the diagonal matrix of eigenvalues. If interested, the derivation of this equation for whitening can be seen in [5].
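The steps above can be sketched as follows, treating the feature map as a $C \times (H \cdot W)$ matrix. This is an illustrative sketch, not the paper's code; the small `eps` guard against near-zero eigenvalues is my own addition.

```python
import numpy as np

def whiten(f_c, eps=1e-8):
    """Whitening transform of a C x (H*W) feature map: center the data,
    eigendecompose its covariance, and rescale so the whitened features
    have (approximately) identity covariance."""
    m_c = f_c.mean(axis=1, keepdims=True)
    f_c = f_c - m_c                                  # subtract mean vector
    cov = f_c @ f_c.T / (f_c.shape[1] - 1)           # covariance matrix
    D, E = np.linalg.eigh(cov)                       # eigenvalues D, eigenvectors E
    D = np.maximum(D, eps)                           # guard tiny/negative eigenvalues
    return E @ np.diag(D ** -0.5) @ E.T @ f_c        # E D^{-1/2} E^T f_c
```

Because `E @ diag(D)**-0.5 @ E.T` inverts the square root of the covariance, the whitened output has unit variance in every eigenvector direction, which is what strips the style statistics from the features.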
Li et al. found that whitening removed styles from the image.
References
[1] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In NIPS, 2015.
[2] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[3] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In ICLR, 2017.
[4] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[5] R. Picard. MAS 622J/1.126J: Pattern Recognition and Analysis, Lecture 4. http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf