Conditional Image Synthesis with Auxiliary Classifier GANs


Abstract: "In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128×128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128×128 samples are more than twice as discriminable as artificially resized 32×32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data." Odena et al., 2016

== Introduction ==

== Motivation ==

== Previous Work ==

== Model ==

== Results ==

== Critique ==

== References ==

[1] Odena, A., Olah, C., & Shlens, J. (2016). Conditional image synthesis with auxiliary classifier GANs. arXiv preprint [http://proceedings.mlr.press/v70/odena17a.html arXiv:1610.09585].