1. Generative Adversarial Networks (GAN)
Details:
- Concept: GANs involve two neural networks, a generator and a discriminator, that are trained simultaneously. The generator tries to create data that mimics real data, while the discriminator tries to distinguish between real and fake data.
Architecture:
- Generator (G):
  - Input: Noise vector z sampled from a predefined distribution (e.g., Gaussian).
  - Layers: Fully connected or convolutional layers.
  - Output: Synthetic data point.
- Discriminator (D):
  - Input: Real or synthetic data.
  - Layers: Fully connected or convolutional layers.
  - Output: Scalar value indicating the probability that the input is real.
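The two networks above can be sketched as small fully connected models. This is a minimal illustration, not a prescribed design; the dimensions (noise_dim=64, data_dim=784, hidden=128) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the text).
noise_dim, data_dim, hidden = 64, 784, 128

# Generator: noise vector z -> synthetic data point.
G = nn.Sequential(
    nn.Linear(noise_dim, hidden),
    nn.ReLU(),
    nn.Linear(hidden, data_dim),
    nn.Tanh(),  # output scaled to [-1, 1], matching normalized real data
)

# Discriminator: data point -> probability that the input is real.
D = nn.Sequential(
    nn.Linear(data_dim, hidden),
    nn.LeakyReLU(0.2),
    nn.Linear(hidden, 1),
    nn.Sigmoid(),  # scalar probability in (0, 1)
)

z = torch.randn(16, noise_dim)  # batch of 16 noise vectors
fake = G(z)                     # synthetic batch, shape (16, 784)
p_real = D(fake)                # per-sample probabilities, shape (16, 1)
```

Convolutional versions (as in DCGAN) follow the same input/output contract; only the layers change.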
Modifications:
- Variants such as Deep Convolutional GANs (DCGANs), which use convolutional architectures, and Wasserstein GANs (WGANs), which replace the original loss with a Wasserstein-distance objective, improve training stability and sample quality.
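To make the WGAN modification concrete: the discriminator becomes a "critic" with an unbounded scalar output, and the losses are plain means of critic scores rather than cross-entropy. A minimal sketch with made-up critic scores:

```python
import torch

# Hypothetical critic scores on one batch (illustrative numbers only).
critic_real = torch.tensor([1.2, 0.8, 1.5])    # scores on real samples
critic_fake = torch.tensor([-0.3, 0.1, -0.7])  # scores on generated samples

# Critic maximizes the gap between real and fake scores
# (equivalently, minimizes the negated gap).
critic_loss = critic_fake.mean() - critic_real.mean()

# Generator tries to raise the critic's scores on its samples.
gen_loss = -critic_fake.mean()
```

In the original WGAN the critic's weights are clipped to enforce a Lipschitz constraint; WGAN-GP replaces clipping with a gradient penalty.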
Use Cases:
- Image synthesis, text-to-image generation, image inpainting, and various scientific applications.
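The simultaneous training described above alternates between a discriminator step and a generator step. A minimal sketch of one such step, using random stand-in data and illustrative sizes (both assumptions):

```python
import torch
import torch.nn as nn

noise_dim, data_dim = 64, 784  # illustrative sizes
G = nn.Sequential(nn.Linear(noise_dim, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(32, data_dim)  # stand-in for a batch of real data
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

# Discriminator step: push D(real) -> 1 and D(fake) -> 0.
# detach() keeps this step from updating the generator.
fake = G(torch.randn(32, noise_dim)).detach()
d_loss = bce(D(real), ones) + bce(D(fake), zeros)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push D(G(z)) -> 1, i.e. fool the discriminator.
fake = G(torch.randn(32, noise_dim))
g_loss = bce(D(fake), ones)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

In practice these two steps repeat over many minibatches; the variants listed under Modifications mainly change the loss or the architecture, not this alternating structure.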
2. Conditional GAN (cGAN)
Details:
- Concept: cGANs are an extension of GANs where both the generator and discriminator are conditioned on some additional information.
Architecture:
- Generator (G):
  - Input: Noise vector z concatenated with a class label c.
  - Layers: Similar to GAN but conditioned on c.
  - Output: Synthetic data conditioned on c.
- Discriminator (D):
  - Input: Real or synthetic data concatenated with class label c.
  - Layers: Similar to GAN but conditioned on c.
  - Output: Scalar value indicating the probability that the input is real.
Modifications:
- The conditioning information can be any auxiliary data, such as class labels or text descriptions.
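The concatenation-based conditioning described above can be sketched by one-hot encoding the class label c and appending it to both networks' inputs. Sizes (10 classes, noise_dim=64, data_dim=784) are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim, data_dim, n_classes, hidden = 64, 784, 10, 128  # illustrative

# Both networks widen their input layer by n_classes to accept the label.
G = nn.Sequential(
    nn.Linear(noise_dim + n_classes, hidden), nn.ReLU(),
    nn.Linear(hidden, data_dim), nn.Tanh(),
)
D = nn.Sequential(
    nn.Linear(data_dim + n_classes, hidden), nn.LeakyReLU(0.2),
    nn.Linear(hidden, 1), nn.Sigmoid(),
)

labels = torch.randint(0, n_classes, (16,))
c = F.one_hot(labels, n_classes).float()   # (16, 10) one-hot label vectors

z = torch.randn(16, noise_dim)
fake = G(torch.cat([z, c], dim=1))         # generator conditioned on c
p_real = D(torch.cat([fake, c], dim=1))    # discriminator also sees c
```

For richer conditioning (e.g., text descriptions), the one-hot vector is typically replaced by a learned embedding, but the concatenation pattern is the same.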