Hi, I have a question.
The "test" mode works fine and the results look nice, but the "refer test" mode results are very blurry. I saved the cycle images (the ones used for the cycle loss) and found that they are blurry too.
I set the cycle weight to 5, but then all of the results got worse.
What should I do to improve the quality of the cycle images or the "refer test" results? Can you help me figure this out?
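For context, by "cycle loss" I mean the usual L1 reconstruction term between the input image and the image cycled back to the source domain, scaled by the cycle weight; a minimal sketch (the function name is mine, not this repo's):

```python
import tensorflow as tf

# Minimal sketch of the cycle-consistency term: an L1 reconstruction
# loss between the input image x and its cycled version x_cycled,
# scaled by the cycle weight from the config.
def cycle_loss(x, x_cycled, cycle_weight=1.0):
    return cycle_weight * tf.reduce_mean(tf.abs(x - x_cycled))
```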
THANK YOU !!!
Hi,
Thanks for the code. I am wondering what would happen if I used distributed training with TensorFlow in your project, since I have 2 GPUs. I see that during the training phase you split the image batch across the GPUs and then feed each shard inside a for loop that iterates over the GPUs.
I am not sure this is the optimal way to do it; I am not very familiar with TensorFlow, but after looking around I found the official guide on distributed training with TensorFlow (`tf.distribute`).
So this is not really an issue, but I think your training loop could be simplified by applying it.
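To illustrate what I mean, here is a toy sketch (my own code, not the repo's) of a custom training loop using `tf.distribute.MirroredStrategy`, which mirrors the variables on every visible GPU and all-reduces the gradients automatically, instead of the manual per-GPU for loop:

```python
import tensorflow as tf

# Hypothetical sketch: MirroredStrategy handles the per-GPU sharding and
# gradient aggregation itself. On a machine without GPUs it falls back
# to a single CPU replica, so this also runs as-is for testing.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # toy stand-in for the real generator/discriminator
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.01)

# the input pipeline is sharded across replicas automatically
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.random.normal([8, 1]))).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for x, y in dist_dataset:
    per_replica_loss = strategy.run(train_step, args=(x, y))
last_loss = strategy.reduce(
    tf.distribute.ReduceOp.MEAN, per_replica_loss, axis=None)
```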
Hi, I was reimplementing this work in PyTorch, but my results are very bad.
Thank you for sharing; I have some questions:
What training parameters did you use, e.g. `gan_type`?
Your default `gan_type` is `gan`, but that type does not use a gradient penalty, and the authors say they use the R1 gradient penalty. So I guess you used `gan_type=dragan`, right?
How do you get the target-domain label y~ and the source-domain label y?
In my opinion, the dataloader for this model looks like one for classification training:
the data format is [X_img, domain_label].
I think domain_label is the original domain of x, i.e. the source-domain label y. But what is the target-domain label y~, and how do you get it? Is it just random?
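In case it helps: one common choice (my assumption, not confirmed by this repo) is to sample y~ uniformly at random from the other domains, e.g.:

```python
import random

def sample_target_domain(y_src, num_domains):
    """Sample a random target-domain label y~. Excluding the source
    domain y_src forces the generator to actually translate rather
    than reconstruct the input."""
    candidates = [d for d in range(num_domains) if d != y_src]
    return random.choice(candidates)
```

In reference-guided mode, y~ would instead come from the domain label of the reference image's batch.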
Hi, thanks for sharing!
I used your StarGAN v2 to train on CelebA with `gan_type` "hinge", batch size 2, and `sn` true. I trained for about two days; the results are bad, with large green areas on the faces. Should I keep training, or change the config?
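For reference, the `gan_type` "hinge" setting I used corresponds to the standard hinge adversarial loss; a sketch with my own function names (not the repo's exact code):

```python
import tensorflow as tf

# Standard hinge GAN losses: the discriminator pushes real logits above
# +1 and fake logits below -1; the generator maximizes the fake logits.
def d_hinge_loss(real_logits, fake_logits):
    return (tf.reduce_mean(tf.nn.relu(1.0 - real_logits))
            + tf.reduce_mean(tf.nn.relu(1.0 + fake_logits)))

def g_hinge_loss(fake_logits):
    return -tf.reduce_mean(fake_logits)
```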