Overview
Generative Adversarial Nets (GANs):
- A recently introduced framework for training generative models through an adversarial game, sidestepping intractable probabilistic computations.
Conditional Generative Adversarial Nets (CGANs):
- Unconditioned Model Limitation: No control over modes of the data being generated.
- Conditioning: Directs data generation by feeding additional information y (e.g., class labels, parts of data for inpainting, or data from other modalities) to both the generator and the discriminator (see the sketch after this list).
- Demonstration: Construction of a CGAN with empirical results on the MNIST and MIR Flickr 25,000 datasets.
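A minimal PyTorch sketch of the conditioning idea (module names and layer sizes are illustrative assumptions; the paper routes z and y through separate hidden layers before combining them, whereas this sketch simply concatenates them):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """Generator conditioned on a class label: the noise vector z is
    concatenated with a one-hot encoding of y before being mapped to
    data space (e.g., 28x28 = 784 MNIST pixels)."""

    def __init__(self, z_dim: int = 100, n_classes: int = 10, out_dim: int = 784):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        y_onehot = F.one_hot(y, num_classes=self.n_classes).float()
        return self.net(torch.cat([z, y_onehot], dim=1))


# Usage: generate four samples, all conditioned on the digit class 3.
g = ConditionalGenerator()
z = torch.randn(4, 100)
y = torch.full((4,), 3, dtype=torch.long)
samples = g(z, y)  # shape (4, 784)
```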
Related Work
Multi-modal Learning for Image Labeling:
- Supervised neural networks have been successful but struggle to scale to extremely large numbers of output categories and to learn probabilistic one-to-many mappings (e.g., many tags can apply to a single image).
- Leveraging additional information from other modalities (e.g., natural language corpora) can improve classification performance.
- Conditional probabilistic generative models can handle one-to-many mappings by treating the input as a conditioning variable and instantiating the mapping as a conditional predictive distribution (formalized below).
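A one-line formalization of this view (notation is mine, not the paper's): instead of a deterministic map y = f(x), the model defines

$$ y \sim p(y \mid x), $$

so several distinct labels can all receive non-negligible probability for the same input x.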
Conditional Adversarial Nets
Generative Adversarial Nets:
- Structure:
- Generator (G): Maps a prior noise distribution p_z(z) to the data space as G(z; θ_g).
- Discriminator (D): Outputs the probability that x came from training data rather than G.
- Training: G and D are trained simultaneously, competing in a two-player minimax game (value function below).
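For reference, the value function of this game, as introduced in Goodfellow et al. (2014):

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$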

Conditional Adversarial Nets (CGANs):