
mafda / generative_adversarial_networks_101

Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.

License: MIT License

Jupyter Notebook 100.00%
gan mnist dcgan tensorflow keras generative-adversarial-network cgan ccgan cifar10 cifar-10 cgans ccgans gans wgan lsgan jupyter-notebook lsgans mnist-dataset generative-adversarial-networks conda-environment

generative_adversarial_networks_101's Introduction

Generative Adversarial Networks - GANs

This repository presents the basic notions behind Generative Adversarial Networks (GANs).

"...the most interesting idea in the last 10 years in ML". Yann LeCun

Definition

Generative Adversarial Networks (GANs) are a framework proposed by Ian Goodfellow, Yoshua Bengio, and others in 2014.

GANs are composed of two models, each represented by an artificial neural network:

  • The first model, called the Generator, aims to generate new data similar to the expected one.
  • The second model, called the Discriminator, aims to recognize whether an input is 'real' (belongs to the original dataset) or 'fake' (produced by the generator).

Read more in this post GANs — Generative Adversarial Networks 101.
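
As a rough illustration of this two-model setup, a minimal Keras sketch could look like the following (layer sizes and names such as latent_dim are illustrative and not taken from the notebooks):

# Minimal GAN wiring sketch (illustrative; not the exact notebook code)
import numpy as np
from tensorflow.keras import layers, models, optimizers

latent_dim = 100          # size of the random noise vector fed to the generator
img_shape = (28, 28, 1)   # MNIST-sized images

# Generator: noise -> fake image
generator = models.Sequential([
    layers.Dense(128, activation='relu', input_dim=latent_dim),
    layers.Dense(int(np.prod(img_shape)), activation='tanh'),
    layers.Reshape(img_shape),
])

# Discriminator: image -> probability that it is real
discriminator = models.Sequential([
    layers.Flatten(input_shape=img_shape),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer=optimizers.Adam(1e-4), loss='binary_crossentropy')

# Combined model: freeze the discriminator and train the generator to fool it
discriminator.trainable = False
z = layers.Input(shape=(latent_dim,))
combined = models.Model(z, discriminator(generator(z)))
combined.compile(optimizer=optimizers.Adam(1e-4), loss='binary_crossentropy')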

Configure environment

  • Create the conda environment
(base)$: conda env create -f environment.yml
  • Activate the environment
(base)$: conda activate gans_101
  • Run!
(gans_101)$: python -m jupyter notebook

Models

Definition and training of several models with the MNIST and CIFAR-10 datasets.

MNIST dataset

CIFAR-10 dataset

Results

Results of training the models with Keras (TensorFlow).

MNIST dataset

Generative Adversarial Networks - GANs

A GAN implementation using fully connected layers. Notebook

[Figures: GAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

Deep Convolutional Generative Adversarial Networks - DCGANs

A DCGAN implementation using the transposed convolution technique. Notebook
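
A sketch of what such a transposed-convolution generator can look like (illustrative layer sizes, not the notebook's exact architecture):

# Transposed-convolution (DCGAN-style) generator sketch for 28x28 MNIST images
from tensorflow.keras import layers, models

latent_dim = 100
generator = models.Sequential([
    layers.Dense(7 * 7 * 128, input_dim=latent_dim),
    layers.LeakyReLU(0.2),
    layers.Reshape((7, 7, 128)),
    layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding='same'),   # 7x7 -> 14x14
    layers.BatchNormalization(),
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding='same',
                           activation='tanh'),                              # 14x14 -> 28x28
])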

[Figures: DCGAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

Conditional Generative Adversarial Nets - CGANs

A CGAN implementation using fully connected layers and embedding layers. Notebook
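
One common way to wire the label conditioning, sketched below under the assumption that the class label is embedded and concatenated with the noise vector (names like num_classes are illustrative):

# Label-conditioned generator sketch (embedding + concatenation; illustrative)
import numpy as np
from tensorflow.keras import layers, models

latent_dim, num_classes = 100, 10
img_shape = (28, 28, 1)

noise = layers.Input(shape=(latent_dim,))
label = layers.Input(shape=(1,), dtype='int32')

# Map the class label to a dense vector and merge it with the noise
label_embedding = layers.Flatten()(layers.Embedding(num_classes, latent_dim)(label))
joined = layers.Concatenate()([noise, label_embedding])

x = layers.Dense(256)(joined)
x = layers.LeakyReLU(0.2)(x)
x = layers.Dense(int(np.prod(img_shape)), activation='tanh')(x)
img = layers.Reshape(img_shape)(x)

generator = models.Model([noise, label], img)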

[Figures: CGAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

Context-Conditional Generative Adversarial Networks - CCGANs

A CCGAN implementation using a U-Net and convolutional neural networks. Notebook
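
In the context-conditional setup the generator is conditioned on a partially masked image (an inpainting-style task) rather than on a class label. A minimal masking helper, sketched here as an assumption of what the notebook's mask_randomly does (sizes and conventions may differ):

# Random-masking sketch for the CCGAN conditioning step (illustrative)
import numpy as np

def mask_randomly(imgs, mask_size=8):
    # imgs: array of shape (batch, height, width, channels)
    # Zero out one random mask_size x mask_size square in each image
    masked = imgs.copy()
    h, w = imgs.shape[1], imgs.shape[2]
    for k in range(imgs.shape[0]):
        y = np.random.randint(0, h - mask_size)
        x = np.random.randint(0, w - mask_size)
        masked[k, y:y + mask_size, x:x + mask_size, :] = 0
    return masked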

[Figures: CCGAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

Wasserstein Generative Adversarial Networks - WGANs

A WGAN implementation using a convolutional neural network. Notebook
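
The Wasserstein variant mainly changes the critic's objective and adds weight clipping; a sketch of those two pieces (illustrative, not the notebook's exact code):

# Wasserstein critic loss and weight clipping sketch (illustrative)
import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras import layers, models

def wasserstein_loss(y_true, y_pred):
    # With y_true = +1 for real and -1 for generated samples, minimizing this
    # loss lets the critic approximate the Wasserstein distance
    return K.mean(y_true * y_pred)

# A tiny stand-in critic, just to show the compile/clip pattern
critic = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(1),   # linear output: a score, not a probability
])
critic.compile(optimizer='rmsprop', loss=wasserstein_loss)

# After each critic update, WGAN clips the weights to enforce the Lipschitz constraint
clip_value = 0.01
for layer in critic.layers:
    layer.set_weights([np.clip(w, -clip_value, clip_value) for w in layer.get_weights()])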

[Figures: WGAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

Least Squares Generative Adversarial Networks - LSGANs

An LSGAN implementation using fully connected layers. Notebook
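
The least-squares variant keeps the same kind of networks but swaps the cross-entropy objective for a mean-squared-error one on the 0/1 real-vs-fake targets; a sketch of the discriminator side (illustrative sizes):

# LSGAN discriminator sketch: same idea as the GAN discriminator, but compiled with MSE
from tensorflow.keras import layers, models

discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128),
    layers.LeakyReLU(0.2),
    layers.Dense(1),   # LSGAN penalizes the squared distance to the target label
])
discriminator.compile(optimizer='adam', loss='mse', metrics=['binary_accuracy'])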

[Figures: LSGAN samples on MNIST at epoch 00 and epoch 100, and the training loss curve]

CIFAR-10 dataset

Deep Convolutional Generative Adversarial Networks - DCGANs

A DCGAN implementation using the transposed convolution technique. Notebook

[Figures: DCGAN samples on CIFAR-10 at epoch 00 and epoch 100, and the training loss curve]

Conditional Generative Adversarial Networks - CGANs

A CGAN implementation using transposed convolutions, a convolutional neural network, and concatenate layers. Notebook
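
One way to read "concatenate layers" here, sketched under the assumption that the label is embedded, reshaped into an extra feature map, and concatenated with the image channels before the convolutions (illustrative, not the notebook's exact wiring):

# Conditioning a convolutional discriminator on the label via channel concatenation (sketch)
from tensorflow.keras import layers, models

num_classes = 10
img_shape = (32, 32, 3)   # CIFAR-10 images

img = layers.Input(shape=img_shape)
label = layers.Input(shape=(1,), dtype='int32')

# Turn the label into one extra 32x32 "channel"
label_map = layers.Embedding(num_classes, img_shape[0] * img_shape[1])(label)
label_map = layers.Reshape((img_shape[0], img_shape[1], 1))(label_map)

x = layers.Concatenate()([img, label_map])                          # 32x32x4
x = layers.Conv2D(64, kernel_size=3, strides=2, padding='same')(x)
x = layers.LeakyReLU(0.2)(x)
x = layers.Flatten()(x)
out = layers.Dense(1, activation='sigmoid')(x)

discriminator = models.Model([img, label], out)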

[Figures: CGAN samples on CIFAR-10 at epoch 00 and epoch 100, and the training loss curve]


made with 💙 by mafda

generative_adversarial_networks_101's People

Contributors

mafda

generative_adversarial_networks_101's Issues

Why are random labels used for training d_g?

In 03_CGAN_MNIST
(1) d_loss_real = discriminator.train_on_batch(x=[X_batch, real_labels], y=real * (1 - smooth))
(2) d_loss_fake = discriminator.train_on_batch(x=[X_fake, random_labels], y=fake)
(3) d_g_loss_batch = d_g.train_on_batch(x=[z, random_labels], y=real)

To train the discriminator, you first train on X_batch with real_labels and then on X_fake with random_labels. I think this should be real_labels instead of random_labels in equation (2).

To train the generator, in equation (3), why do you use random_labels for training d_g instead of real_labels?
Thank you.

Why do we always use img_real to train the generator weights in CCGAN?

In CCGAN, we feed img_real and the generated (fake) images to train the discriminator weights. After that, we should train the generator weights. I think we should use [masked_imgs, real] as the input and output pair to train the generator, but you use img_real for that. When the input is img_real, how can the generator learn anything about generating a new picture? Could you explain the reason? Thanks.

for e in range(epochs + 1):
    for i in range(len(X_train) // batch_size):
        
        # Train Discriminator weights
        discriminator.trainable = True
        
        # Real samples
        img_real = X_train[i*batch_size:(i+1)*batch_size]
        real_labels = y_train[i*batch_size:(i+1)*batch_size]
        
        d_loss_real = discriminator.train_on_batch(x=img_real, y=[real, real_labels])
        
        # Fake Samples
        masked_imgs = mask_randomly(img_real)
        gen_imgs = generator.predict(masked_imgs)
        
        d_loss_fake = discriminator.train_on_batch(x=gen_imgs, y=[fake, fake_labels])
         
        # Discriminator loss
        d_loss_batch = 0.5 * (d_loss_real[0] + d_loss_fake[0])
        
        # Train Generator weights
        discriminator.trainable = False

        d_g_loss_batch = d_g.train_on_batch(x=img_real, y=real)     # =============> d_g_loss_batch = d_g.train_on_batch(x=masked_imgs, y=real) 

DCGAN and CGAN suffer from mode collapse

After training for 200 epochs with the CIFAR-10 dataset, I found that the images generated by DCGAN and CGAN do not have enough variety. Most likely mode collapse is happening.

LSGAN loss function

Hi there,

I have a question about your awesome work.
Based on the LSGAN paper, the loss function might look like this in a TensorFlow version:
D_loss = 0.5 * (tf.reduce_mean((D_real - 1)**2) + tf.reduce_mean(D_fake**2))
G_loss = 0.5 * tf.reduce_mean((D_fake - 1)**2)

I checked your loss function, which is MSE. Where do you define the a, b, c for LSGAN?
d_g.compile(optimizer=optimizer, loss='mse', metrics=['binary_accuracy'])

Thank you,

Why do we use discriminator.trainable = False before d_g.train_on_batch(x=z, y=real) in 01_GAN_MNIST_original?

for e in range(epochs):
    for i in range(len(X_train) // batch_size):

        # Train Discriminator weights
        discriminator.trainable = True

        # Real samples
        X_batch = X_train[i*batch_size:(i+1)*batch_size]
        d_loss_real = discriminator.train_on_batch(x=X_batch, y=real * (1 - smooth))

        # Fake Samples
        z = np.random.normal(loc=0, scale=1, size=(batch_size, latent_dim))
        X_fake = generator.predict_on_batch(z)
        d_loss_fake = discriminator.train_on_batch(x=X_fake, y=fake)

        # Discriminator loss
        d_loss_batch = 0.5 * (d_loss_real[0] + d_loss_fake[0])

        # Train Generator weights
        discriminator.trainable = False   <<============
        d_g_loss_batch = d_g.train_on_batch(x=z, y=real)

Hello, I know the GAN training process, but I am a little bit confused by the use of discriminator.trainable = False (see <<============), because in d_g the discriminator is already non-trainable. Can we remove this line (<<============), or does it have some other meaning here that I am not able to get?

Please help me out. Thank you.
