
relativisticgan's People

Contributors

alexiajm, feepingcreature


relativisticgan's Issues

dataset

Could you tell me how to change the dataset? Is the CelebA dataset OK? Thank you!
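
For reference, a minimal sketch of swapping in CelebA, assuming the training script builds its dataset with torchvision (the path, crop, and resolution below are placeholders):

    import torch
    import torchvision.datasets as dset
    import torchvision.transforms as transforms

    # Hypothetical CelebA loader: point ImageFolder at a directory that
    # contains the aligned CelebA images in at least one subfolder.
    transform = transforms.Compose([
        transforms.CenterCrop(178),   # aligned CelebA images are 178x218
        transforms.Resize(64),        # match the training resolution
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])
    dataset = dset.ImageFolder(root="/path/to/celeba", transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)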

Is it normal to set requires_grad=False for D before updating G?

Hi!
First, thank you for sharing the code. I appreciate it.
I have a question.
When you update G, the gradient of D is not computed?
Is there a specific reason you set requires_grad=False for D before updating G?
This is applied to all types of GAN, including the original GAN, even though the original implementation does not do it.
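
For context, a minimal sketch of the pattern being asked about, with hypothetical stand-in models; gradients still flow through D into G during backward(), but no gradient buffers are filled for D's own parameters:

    import torch
    import torch.nn as nn

    G = nn.Linear(8, 4)   # hypothetical generator
    D = nn.Linear(4, 1)   # hypothetical discriminator
    optimizer_G = torch.optim.Adam(G.parameters())

    # Freeze D before the generator step: D's parameter gradients are not
    # needed to update G, so skipping them saves memory and compute.
    for p in D.parameters():
        p.requires_grad = False

    z = torch.randn(16, 8)
    errG = nn.BCEWithLogitsLoss()(D(G(z)), torch.ones(16, 1))
    optimizer_G.zero_grad()
    errG.backward()        # G gets gradients; D's .grad stays None
    optimizer_G.step()

    # Unfreeze before the next discriminator update
    for p in D.parameters():
        p.requires_grad = True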

Is the loss of RaGAN and RaLSGAN wrong?

Take RaGAN for example. According to Algorithm 2 of the paper, the loss of RaGAN should be:

    errD = (BCE_stable(y_pred - torch.mean(y_pred_fake), y) + BCE_stable(y_pred_fake - torch.mean(y_pred), y2))/2
    errG = (BCE_stable(y_pred - torch.mean(y_pred_fake), y2) + BCE_stable(y_pred_fake - torch.mean(y_pred), y))/2

while yours is:

    errD = (BCE_stable(y_pred - torch.mean(y_pred_fake), y) + BCE_stable(torch.mean(y_pred_fake) - y_pred, y2))/2
    errG = (BCE_stable(y_pred - torch.mean(y_pred_fake), y2) + BCE_stable(torch.mean(y_pred_fake) - y_pred, y))/2

Why?
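
To make the comparison concrete, a small numerical sketch, assuming BCE_stable is nn.BCEWithLogitsLoss() and y / y2 are the all-ones / all-zeros label tensors used in the repo code:

    import torch
    import torch.nn as nn

    BCE_stable = nn.BCEWithLogitsLoss()
    y_pred, y_pred_fake = torch.randn(8, 1), torch.randn(8, 1)
    y, y2 = torch.ones(8, 1), torch.zeros(8, 1)

    # Algorithm 2 in the paper: each fake logit shifted by the mean real logit
    errD_paper = (BCE_stable(y_pred - y_pred_fake.mean(), y)
                  + BCE_stable(y_pred_fake - y_pred.mean(), y2)) / 2
    # Repo code: the mean fake logit shifted by each real logit
    errD_repo = (BCE_stable(y_pred - y_pred_fake.mean(), y)
                 + BCE_stable(y_pred_fake.mean() - y_pred, y2)) / 2
    # The second terms average over different quantities, so the two
    # values differ in general.
    print(errD_paper.item(), errD_repo.item())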

Training curve and hybrid models

Hi, I have read your paper. It was a really interesting idea!

I've been trying to implement your paper in TensorFlow, and I wonder if my implementation is right. I'm familiar with WGAN-GP, so I tried RSGAN-GP first. Looking at the training curve of the discriminator loss, I found it fluctuating around 0.5. Is this a normal phenomenon?

Also, I wonder if the idea of RGAN is extendable to hybrid models, e.g. VAE-GAN, or to combinations with other MSE-like loss functions?
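
For cross-checking a TensorFlow port against the reference, a minimal PyTorch sketch of the RSGAN discriminator loss with a WGAN-GP-style gradient penalty, assuming the critic D returns one logit per image and inputs are NCHW; the penalty weight is illustrative:

    import torch
    import torch.nn as nn

    BCE_stable = nn.BCEWithLogitsLoss()

    def rsgan_gp_d_loss(D, x_real, x_fake, lambda_gp=10.0):
        # RSGAN: the critic score of real data should exceed that of fake data
        y = torch.ones(x_real.size(0), 1, device=x_real.device)
        errD = BCE_stable(D(x_real) - D(x_fake.detach()), y)
        # Gradient penalty on random interpolates between real and fake
        eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
        x_hat = (eps * x_real + (1 - eps) * x_fake.detach()).requires_grad_(True)
        grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
        gp = ((grad.view(grad.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
        return errD + lambda_gp * gp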

Add relativism to CycleGAN

Hey @AlexiaJM,

Great work. I am trying to add relativism to CycleGAN, but I'm a little confused about how to do it, since CycleGAN has 2 generators and 2 discriminators.

The generator loss in CycleGAN is calculated as follows:

        # GAN loss
        fake_B = G_AB(real_A)
        loss_GAN_AB = criterion_GAN(D_B(fake_B), valid)
        fake_A = G_BA(real_B)
        loss_GAN_BA = criterion_GAN(D_A(fake_A), valid)

        loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2  

Discriminator loss:

        #  Train Discriminator A
        optimizer_D_A.zero_grad()
        # Real loss
        loss_real = criterion_GAN(D_A(real_A), valid)
        # Fake loss (on batch of previously generated samples)
        fake_A_ = fake_A_buffer.push_and_pop(fake_A)
        loss_fake = criterion_GAN(D_A(fake_A_.detach()), fake)
        # Total loss
        loss_D_A = (loss_real + loss_fake) / 2
        loss_D_A.backward()
        optimizer_D_A.step()

        #  Train Discriminator B
        optimizer_D_B.zero_grad()
        # Real loss
        loss_real = criterion_GAN(D_B(real_B), valid)
        # Fake loss (on batch of previously generated samples)
        fake_B_ = fake_B_buffer.push_and_pop(fake_B)
        loss_fake = criterion_GAN(D_B(fake_B_.detach()), fake)
        # Total loss
        loss_D_B = (loss_real + loss_fake) / 2
        loss_D_B.backward()
        optimizer_D_B.step()

        loss_D = (loss_D_A + loss_D_B) / 2

and criterion_GAN is MSELoss().

I modified the code and added relativism, but I am not sure I did it correctly.
Generator loss:

        loss_GAN_AB = (torch.mean((D_B(real_A) - torch.mean(D_B(fake_B)) + valid) ** 2) +
                       torch.mean((D_B(fake_B) - torch.mean(D_B(real_A)) - valid) ** 2)) / 2

        loss_GAN_BA = (torch.mean((D_A(real_B) - torch.mean(D_A(fake_A)) + valid) ** 2) +
                       torch.mean((D_A(fake_A) - torch.mean(D_A(real_B)) - valid) ** 2)) / 2

        loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2

Discriminator loss:

        optimizer_D_A.zero_grad()
        fake_A_ = fake_A_buffer.push_and_pop(fake_A)

        errD_A = (torch.mean((D_A(real_A) - torch.mean(D_A(fake_A_.detach())) - valid) ** 2) +
                  torch.mean((D_A(fake_A_.detach()) - torch.mean(D_A(real_A)) + valid) ** 2)) / 2

        errD_A.backward()
        optimizer_D_A.step()

        # Train Second Discriminator (B)
        optimizer_D_B.zero_grad()
        fake_B_ = fake_B_buffer.push_and_pop(fake_B)
        errD_B = (torch.mean((D_B(real_B) - torch.mean(D_B(fake_B_.detach())) - valid) ** 2) +
                  torch.mean((D_B(fake_B_.detach()) - torch.mean(D_B(real_B)) + valid) ** 2)) / 2
        errD_B.backward()
        optimizer_D_B.step()
        loss_D = (errD_A + errD_B) / 2

I would appreciate it if you could help me figure out how to add relativism to CycleGAN.

Thanks in advance!
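
For comparison, a minimal sketch of the RaLSGAN generator loss for one CycleGAN direction, following the repo's RaLSGAN formulation. Note that it scores real images of the target domain (real_B for the A-to-B direction), whereas the snippet above feeds real_A to D_B; whether that was intended is part of the question:

    import torch

    def ralsgan_g_loss(D, x_real, x_fake, valid):
        # Relativistic average LSGAN generator loss for one direction;
        # x_real must come from the domain that D discriminates.
        y_pred = D(x_real)        # critic scores on real target-domain images
        y_pred_fake = D(x_fake)   # critic scores on translated images
        return (torch.mean((y_pred - torch.mean(y_pred_fake) + valid) ** 2)
                + torch.mean((y_pred_fake - torch.mean(y_pred) - valid) ** 2)) / 2

    # Hypothetical usage for the A-to-B direction:
    # loss_GAN_AB = ralsgan_g_loss(D_B, real_B, fake_B, valid)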

sigmoid activation in last layer of the generator?

Hi,

Thanks for this interesting work. Just one question regarding the following comments in the code snippet you provided on the README page:

No sigmoid activation in last layer of generator because BCEWithLogitsLoss() already adds it

No activation in generator

Is this true? Or do you mean it is the discriminator that should include/exclude the sigmoid activation?

Thanks
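
For what it's worth, the claim about BCEWithLogitsLoss() is easy to check: it folds the sigmoid into the loss, so whichever network's output feeds the loss (the discriminator) can emit raw logits. A minimal check:

    import torch
    import torch.nn as nn

    logits = torch.randn(4, 1)   # raw discriminator outputs, no sigmoid
    targets = torch.ones(4, 1)

    # BCEWithLogitsLoss applies the sigmoid internally ...
    loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
    # ... so it matches a manual sigmoid followed by BCELoss
    loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)
    print(torch.allclose(loss_with_logits, loss_manual))  # True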

version

Could you tell me which versions of TensorFlow and PyTorch are required? Thank you very much!

Should TensorBoard still be disabled?

Hi,
I noticed the disabled TensorBoard logging code, with a comment stating it was incompatible with TensorFlow. However, since the code doesn't seem to use TensorFlow, why not turn it back on? It seems to work fine when I try.
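
If the goal is just scalar logging without a TensorFlow dependency, a hypothetical alternative (assuming PyTorch >= 1.1, which ships its own TensorBoard writer) would be:

    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="runs/relativistic_gan")
    # inside the training loop, with illustrative tag names:
    # writer.add_scalar("loss/D", errD.item(), global_step)
    # writer.add_scalar("loss/G", errG.item(), global_step)
    writer.close()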

Gradient argument in the paper

I read your paper. It's really good work. But the thing I don't understand is the gradient of the generator. Why is there a $C(G(z))$ term in the expression? And I can't find the definition of $J_{\theta}G(z)$ in the paper. Is that the derivative of $G(z)$ with respect to $\theta$?
If you can help me, I will be very grateful.
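
For what it's worth, if the generator loss has the form $\mathbb{E}_z[f(C(G_\theta(z)))]$, the chain rule gives

$$\nabla_\theta \, f\big(C(G_\theta(z))\big) = f'\big(C(G_\theta(z))\big)\, J_\theta G(z)^{\top}\, \nabla_x C(x)\big|_{x=G_\theta(z)},$$

where $J_\theta G(z)$ is the Jacobian of the generator output with respect to the parameters $\theta$, i.e. the multivariate analogue of the derivative of $G(z)$ with respect to $\theta$.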

Suggestion: Use ProGAN in your experiments too.

Hi @AlexiaJM; firstly, great job on your work. I am currently still reading your paper, but upon scrolling to the FID comparison results I didn't notice ProGAN (progressive growing of GANs) in your experiments. The reason I suggest ProGAN is that I have recently worked with it and found it to be quite good and stable. Perhaps by augmenting RaLSGAN with progressive growing you could achieve an even better FID. This is just a suggestion though. Again, great job 👍.

Best regards,
akanimax

something wrong in SGAN

Everything seems great with RSGAN and RaSGAN, but when I test SGAN on CAT 64x64, it cannot generate anything with lr = 0.0002. Should I change it?

Real/Fake Accuracy

If I want to compute the real/fake accuracy of the discriminator for a batch using this method, do I need to do:

    real_accuracy = sigmoid(real_logits - mean(fake_logits)) >= 0.5
    fake_accuracy = sigmoid(fake_logits - mean(real_logits)) < 0.5

i.e., the same input that I would give to the BCE-with-logits loss goes to the sigmoid. Or do I do the standard GAN thing and compute:

    real_accuracy = sigmoid(real_logits) >= 0.5
    fake_accuracy = sigmoid(fake_logits) < 0.5
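
For reference, a runnable sketch of the first (relativistic) variant, assuming the discriminator returns raw logits; since sigmoid(x) >= 0.5 exactly when x >= 0, this amounts to a sign check on the shifted logits:

    import torch

    def relativistic_accuracy(real_logits, fake_logits):
        # The same arguments that would feed BCEWithLogitsLoss go
        # through the sigmoid before thresholding at 0.5.
        real_acc = (torch.sigmoid(real_logits - fake_logits.mean()) >= 0.5).float().mean()
        fake_acc = (torch.sigmoid(fake_logits - real_logits.mean()) < 0.5).float().mean()
        return real_acc, fake_acc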

Question

Hi,
Very nice work. Thanks for sharing!

I have some questions:

  1. When making the 256x256 images, you used PacGAN2. In your paper you concatenate x1 and x2; are x1 and x2 the same image or different images? (See the sketch after this list.)

  2. Does RelativisticGAN not show mode collapse?

  3. What is the performance of RLSGAN? I only see RaLSGAN in your paper.

  4. Is batch size not important for RelativisticGAN?

Thank you
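
Regarding question 1, a hypothetical sketch of PacGAN-style packing, assuming two independently drawn samples from the same source (both real or both fake) are concatenated along the channel axis before the discriminator:

    import torch

    x1 = torch.randn(16, 3, 256, 256)    # first batch of samples
    x2 = torch.randn(16, 3, 256, 256)    # second, independently drawn batch
    packed = torch.cat([x1, x2], dim=1)  # shape (16, 6, 256, 256): the
    # discriminator's first conv layer takes 6 input channels instead of 3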
