FQ-GAN

Recent Update

  • May 22, 2020: Released the pre-trained FQ-BigGAN/BigGAN models at resolution 64×64 and their training logs at the link (10.34G).

  • May 22, 2020: The Selfie2Anime demo is released. Try it out.

  • A Colab notebook for training and testing is available. Put it into FQ-GAN/FQ-U-GAT-IT and follow the training/testing instructions.

  • Selfie2Anime pretrained models are available now!! Halfway checkpoint and Final checkpoint.

  • Photo2Portrait pretrained model is released!


This repository contains source code to reproduce the results presented in the paper:

Feature Quantization Improves GAN Training, ICML 2020
Yang Zhao*, Chunyuan Li*, Ping Yu, Jianfeng Gao, Changyou Chen

Contents

  1. FQ-BigGAN
  2. FQ-U-GAT-IT
  3. FQ-StyleGAN

FQ-BigGAN

This code is based on PyTorchGAN. Here we give more details on code usage. You will need python 3.x, pytorch 1.x, tqdm, and h5py.

Prepare datasets

  1. CIFAR-10 or CIFAR-100 (change C10 to C100 to prepare CIFAR-100)
python make_hdf5.py --dataset C10 --batch_size 256 --data_root data
python calculate_inception_moments.py --dataset C10 --data_root data --batch_size 128
  2. ImageNet: first manually download ImageNet and put all image class folders into ./data/ImageNet, then execute the following commands to prepare ImageNet (128×128)
python make_hdf5.py --dataset I128 --batch_size 256 --data_root data
python calculate_inception_moments.py --dataset I128_hdf5 --data_root data --batch_size 128
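As a quick sanity check, you can inspect the prepared HDF5 file with h5py. The sketch below is illustrative only; the file name follows the BigGAN-PyTorch convention for I128 and is an assumption here.

```python
import h5py

# List the datasets stored in the prepared file with their shapes and dtypes.
# 'data/ILSVRC128.hdf5' is the assumed output path for --dataset I128 with
# --data_root data; adjust it to your setup.
with h5py.File('data/ILSVRC128.hdf5', 'r') as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
```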

Training

We have four bash scripts in FQ-BigGAN/scripts to train CIFAR-10, CIFAR-100, ImageNet (64×64) and ImageNet (128×128), respectively. For example, to train CIFAR-100, you may simply run

sh scripts/launch_C100.sh

To modify the FQ hyper-parameters, we provide the following options in each script as arguments (a sketch of how they enter the quantizer follows this list):

  1. --discrete_layer: specifies the layers to which quantization is added, e.g. 0123
  2. --commitment: the quantization loss coefficient, default=1.0
  3. --dict_size: the size of the EMA dictionary, default=8, meaning there are 2^8 keys in the dictionary
  4. --dict_decay: the momentum used when updating the dictionary, default=0.8
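For orientation, these options correspond to the pieces of an EMA-style feature quantizer. Below is a minimal, self-contained PyTorch sketch of such a quantizer; it is an illustration, not the repository's exact implementation, and the class name and internals are assumptions. Only the roles of dict_size, commitment, and dict_decay follow the options above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureQuantizer(nn.Module):
    """Minimal EMA vector quantizer (a sketch, not the repo's exact code).

    dict_size  -> the dictionary holds 2**dict_size keys (--dict_size)
    commitment -> weight of the commitment loss          (--commitment)
    decay      -> EMA momentum for dictionary updates    (--dict_decay)
    """
    def __init__(self, dim, dict_size=8, commitment=1.0, decay=0.8, eps=1e-5):
        super().__init__()
        n_keys = 2 ** dict_size
        self.commitment, self.decay, self.eps = commitment, decay, eps
        embed = torch.randn(dim, n_keys)
        self.register_buffer('embed', embed)                 # dictionary keys
        self.register_buffer('cluster_size', torch.zeros(n_keys))
        self.register_buffer('embed_avg', embed.clone())

    def forward(self, x):                                    # x: (B, C, H, W)
        flat = x.permute(0, 2, 3, 1).reshape(-1, x.size(1))  # (BHW, C)
        # nearest dictionary key for every feature vector
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.embed
                + self.embed.pow(2).sum(0, keepdim=True))
        idx = dist.argmin(1)
        onehot = F.one_hot(idx, self.embed.size(1)).type(flat.dtype)
        quant = onehot @ self.embed.t()                      # (BHW, C)
        quant = quant.view(x.size(0), x.size(2), x.size(3), x.size(1)).permute(0, 3, 1, 2)

        if self.training:
            with torch.no_grad():                            # EMA dictionary update
                self.cluster_size.mul_(self.decay).add_(onehot.sum(0), alpha=1 - self.decay)
                self.embed_avg.mul_(self.decay).add_(flat.t() @ onehot, alpha=1 - self.decay)
                n = self.cluster_size.sum()
                size = (self.cluster_size + self.eps) / (n + self.embed.size(1) * self.eps) * n
                self.embed.copy_(self.embed_avg / size.unsqueeze(0))

        loss = self.commitment * F.mse_loss(x, quant.detach())   # commitment loss
        quant = x + (quant - x).detach()                          # straight-through estimator
        return quant, loss
```

Usage (sketch): instantiate quantizer = FeatureQuantizer(dim=channels, dict_size=8, commitment=1.0, decay=0.8) inside a discriminator block, call h, q_loss = quantizer(h) on the chosen feature maps, and add q_loss to the discriminator loss.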

Experiment results

Learning curves on CIFAR-100.

FID score comparison with BigGAN on ImageNet

| Model     | 64×64 | 128×128 |
|-----------|-------|---------|
| BigGAN    | 10.55 | 14.88   |
| FQ-BigGAN | 9.67  | 13.77   |

FQ-U-GAT-IT

This experiment is based on the official codebase U-GAT-IT. Here we give more details on dataset preparation and code usage. You will need python 3.6.x, tensorflow-gpu 1.14.0, opencv-python, and tensorboardX.

Prepare datasets

We use selfie2anime, cat2dog, horse2zebra, photo2portrait, vangogh2photo.

  1. selfie2anime: go to U-GAT-IT to download the dataset and unzip it to ./dataset.
  2. cat2dog and photo2portrait: here we provide a bash script adapted from DRIT to download the two datasets.
cd FQ-U-GAT-IT/dataset && sh download_dataset_1.sh [cat2dog, portrait]
  3. horse2zebra and vangogh2photo: here we provide a bash script adapted from CycleGAN to download the two datasets.
cd FQ-U-GAT-IT && bash download_dataset_2.sh [horse2zebra, vangogh2photo]

Training

python main.py --phase train --dataset [type=str, selfie2anime/portrait/cat2dog/horse2zebra/vangogh2photo] --quant [type=bool, True/False] --commitment_cost [type=float, default=2.0] --quantization_layer [type=str, i.e. 123] --decay [type=float, default=0.85]
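For example, a plausible invocation for selfie2anime with quantization enabled and the default FQ settings would be (illustrative; adjust flags to your setup):

python main.py --phase train --dataset selfie2anime --quant True --commitment_cost 2.0 --quantization_layer 123 --decay 0.85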

By default, the training procedure writes checkpoints and intermediate translations of (testA, testB) to checkpoints (checkpoints_quant) and results (results_quant), respectively.

Testing

python main.py --phase test --test_train False --dataset [type=str, selfie2anime/portrait/cat2dog/horse2zebra/vangogh2photo] --quant [type=bool, True/False] --commitment_cost [type=float, default=2.0] --quantization_layer [type=str, i.e. 123] --decay [type=float, default=0.85]

If you load a model directly from the shared checkpoints, remember to put the files into checkpoint_quant/UGATIT_q_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85 (the default) and modify the checkpoint file accordingly. This structure is inherited from the official U-GAT-IT. Please feel free to modify it for convenience.

Usage

├── FQ-GAN
│   └── FQ-U-GAT-IT
│       ├── dataset
│       │   ├── selfie2anime
│       │   ├── portrait
│       │   ├── vangogh2photo
│       │   ├── horse2zebra
│       │   └── cat2dog
│       └── checkpoint_quant
│           ├── UGATIT_q_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85
│           │   ├── checkpoint
│           │   ├── UGATIT.model-480000.data-00000-of-00001
│           │   ├── UGATIT.model-480000.index
│           │   └── UGATIT.model-480000.meta
│           ├── UGATIT_q_portrait_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing_123_2.0_0.85
│           └── ...

If you choose the halfway pretrained model, the contents of the checkpoint file should be

model_checkpoint_path: "UGATIT.model-480000"
all_model_checkpoint_paths: "UGATIT.model-480000"

FQ-StyleGAN

This experiment is based on the official codebase StyleGAN2. The original Flickr-Faces-HQ (FFHQ) dataset includes multi-resolution data. You will need python 3.6.x, tensorflow-gpu 1.14.0, and numpy.

Prepare datasets

To obtain the FFHQ dataset, please refer to the FFHQ repository and download the tfrecords dataset FFHQ-tfrecords into datasets/ffhq.

Training

python run_training.py --num-gpus=8 --data-dir=datasets --config=config-e --dataset=ffhq --mirror-augment=true --total-kimg 25000 --gamma=100 --D_type=1 --discrete_layer [type=string, default=45] --commitment_cost [type=float, default=0.25] --decay [type=float, default=0.8]

FID score comparison with StyleGAN on FFHQ

| Model       | 32×32 | 64×64 | 128×128 | 1024×1024 |
|-------------|-------|-------|---------|-----------|
| StyleGAN    | 3.28  | 4.82  | 6.33    | 5.24      |
| FQ-StyleGAN | 3.01  | 4.36  | 5.98    | 4.89      |

Acknowledgements

We thank official open-source implementations of BigGAN, StyleGAN, StyleGAN2 and U-GAT-IT.


fq-gan's Issues

Performance issues in FQ-BigGAN/TFHub/converter.py(P2)

Hello, I found a performance issue in the definition of dump_tfhub_to_hdf5 in FQ-BigGAN/TFHub/converter.py: sess = tf.Session() is repeatedly created inside the for var in tf.global_variables(): loop and never closed.
Closing (or reusing) this session would improve efficiency and avoid running out of memory.

Here are two files to support this issue,support1 and support2

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
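A minimal sketch of the suggested fix, assuming the loop dumps each variable's value to the HDF5 file (hypothetical structure, not the repository's exact code):

```python
import tensorflow as tf

# Create a single session (closed automatically by the context manager)
# instead of instantiating a new tf.Session() for every variable.
with tf.Session() as sess:
    for var in tf.global_variables():
        value = sess.run(var)
        # ... write `value` into the HDF5 file ...
```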

NaNs for C10

Hello,

I ran the C10 script (literally nothing modified) and I get NaNs:

10/195 (  4.62%) (TE/ETA: 0:05 / 1:49) Deleting singular value logs...al : +nan, D_loss_fake : +nan, Quant_loss : +nan, Perplexity : +1.000

Is this to be expected?

I've tried on 1 GPU, 2 GPUs, and 4 GPUs. All of them give NaNs.

Can't run launch_I128_bs256x4.sh

Hi, I could run launch_C10.sh but not launch_I128_bs256x4.sh. I get the following error; can you please help?

 sh scripts/mlaunch_I128_bs256x4.sh 
{'dataset': 'I128_hdf5', 'augment': False, 'num_workers': 8, 'pin_memory': True, 'shuffle': True, 'load_in_mem': False, 'use_multiepoch_sampler': True, 'dict_decay': 0.8, 'commitment': 15.0, 'discrete_layer': '0123', 'dict_size': 10, 'model': 'BigGAN', 'G_param': 'SN', 'D_param': 'SN', 'G_ch': 64, 'D_ch': 64, 'G_depth': 1, 'D_depth': 1, 'D_wide': True, 'G_shared': False, 'shared_dim': 0, 'dim_z': 120, 'z_var': 1.0, 'hier': True, 'cross_replica': False, 'mybn': False, 'G_nl': 'inplace_relu', 'D_nl': 'inplace_relu', 'G_attn': '64', 'D_attn': '64', 'norm_style': 'bn', 'seed': 0, 'G_init': 'ortho', 'D_init': 'ortho', 'skip_init': False, 'G_lr': 0.0001, 'D_lr': 0.0004, 'G_B1': 0.0, 'D_B1': 0.0, 'G_B2': 0.999, 'D_B2': 0.999, 'batch_size': 256, 'G_batch_size': 0, 'num_G_accumulations': 4, 'num_D_steps': 1, 'num_D_accumulations': 4, 'split_D': False, 'num_epochs': 100, 'parallel': True, 'G_fp16': False, 'D_fp16': False, 'D_mixed_precision': False, 'G_mixed_precision': False, 'accumulate_stats': False, 'num_standing_accumulations': 16, 'G_eval_mode': False, 'save_every': 1000, 'num_save_copies': 2, 'num_best_copies': 5, 'which_best': 'FID', 'no_fid': False, 'test_every': 1000, 'num_inception_images': 50000, 'hashname': False, 'base_root': '', 'data_root': '/filer/tmp2/an499_tmp2', 'weights_root': 'weights', 'logs_root': 'logs', 'samples_root': 'samples', 'pbar': 'mine', 'name_suffix': 'quant', 'experiment_name': '', 'config_from_name': False, 'ema': True, 'ema_decay': 0.9999, 'use_ema': True, 'ema_start': 20000, 'adam_eps': 1e-06, 'BN_eps': 1e-05, 'SN_eps': 1e-06, 'num_G_SVs': 1, 'num_D_SVs': 1, 'num_G_SV_itrs': 1, 'num_D_SV_itrs': 1, 'G_ortho': 0.0, 'D_ortho': 0.0, 'toggle_grads': True, 'which_train_fn': 'GAN', 'load_weights': '', 'resume': False, 'logstyle': '%3.3e', 'log_G_spectra': False, 'log_D_spectra': False, 'sv_log_interval': 10}
Experiment name is BigGAN_I128_hdf5_seed0_Gch64_Dch64_bs256_nDa4_nGa4_Gattn64_Dattn64_Commit15.00_Layer0123_Dicsz10_Dicdecay0.80_quant
Adding attention layer in G at resolution 64
Param count for Gs initialized parameters: 40247811
Adding attention layer in D at resolution 64
Param count for Ds initialized parameters: 39448257
Preparing EMA for G with decay of 0.9999
Adding attention layer in G at resolution 64
Initializing EMA parameters to be source parameters...
Generator(
  (activation): ReLU(inplace=True)
  (shared): identity()
  (linear): SNLinear(in_features=20, out_features=16384, bias=True)
  (blocks): ModuleList(
    (0): ModuleList(
      (0): GBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))
        (bn1): ccbn(
          out: 1024, in: 1000, cross_replica=False
          (gain): Embedding(1000, 1024)
          (bias): Embedding(1000, 1024)
        )
        (bn2): ccbn(
          out: 1024, in: 1000, cross_replica=False
          (gain): Embedding(1000, 1024)
          (bias): Embedding(1000, 1024)
        )
      )
    )
    (1): ModuleList(
      (0): GBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1))
        (bn1): ccbn(
          out: 1024, in: 1000, cross_replica=False
          (gain): Embedding(1000, 1024)
          (bias): Embedding(1000, 1024)
        )
        (bn2): ccbn(
          out: 512, in: 1000, cross_replica=False
          (gain): Embedding(1000, 512)
          (bias): Embedding(1000, 512)
        )
      )
    )
    (2): ModuleList(
      (0): GBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
        (bn1): ccbn(
          out: 512, in: 1000, cross_replica=False
          (gain): Embedding(1000, 512)
          (bias): Embedding(1000, 512)
        )
        (bn2): ccbn(
          out: 256, in: 1000, cross_replica=False
          (gain): Embedding(1000, 256)
          (bias): Embedding(1000, 256)
        )
      )
    )
    (3): ModuleList(
      (0): GBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
        (bn1): ccbn(
          out: 256, in: 1000, cross_replica=False
          (gain): Embedding(1000, 256)
          (bias): Embedding(1000, 256)
        )
        (bn2): ccbn(
          out: 128, in: 1000, cross_replica=False
          (gain): Embedding(1000, 128)
          (bias): Embedding(1000, 128)
        )
      )
      (1): Attention(
        (theta): SNConv2d(128, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (phi): SNConv2d(128, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (g): SNConv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (o): SNConv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (4): ModuleList(
      (0): GBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
        (bn1): ccbn(
          out: 128, in: 1000, cross_replica=False
          (gain): Embedding(1000, 128)
          (bias): Embedding(1000, 128)
        )
        (bn2): ccbn(
          out: 64, in: 1000, cross_replica=False
          (gain): Embedding(1000, 64)
          (bias): Embedding(1000, 64)
        )
      )
    )
  )
  (output_layer): Sequential(
    (0): bn()
    (1): ReLU(inplace=True)
    (2): SNConv2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
)
Discriminator(
  (activation): ReLU(inplace=True)
  (blocks): ModuleList(
    (0): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (downsample): AvgPool2d(kernel_size=2, stride=2, padding=0)
        (conv1): SNConv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(3, 64, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): Quantize()
      (2): Attention(
        (theta): SNConv2d(64, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (phi): SNConv2d(64, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (g): SNConv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (o): SNConv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (1): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (downsample): AvgPool2d(kernel_size=2, stride=2, padding=0)
        (conv1): SNConv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(64, 128, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): Quantize()
    )
    (2): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (downsample): AvgPool2d(kernel_size=2, stride=2, padding=0)
        (conv1): SNConv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): Quantize()
    )
    (3): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (downsample): AvgPool2d(kernel_size=2, stride=2, padding=0)
        (conv1): SNConv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): Quantize()
    )
    (4): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (downsample): AvgPool2d(kernel_size=2, stride=2, padding=0)
        (conv1): SNConv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv_sc): SNConv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (5): ModuleList(
      (0): DBlock(
        (activation): ReLU(inplace=True)
        (conv1): SNConv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): SNConv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
  )
  (linear): SNLinear(in_features=1024, out_features=1, bias=True)
  (embed): SNEmbedding(1000, 1024)
)
Number of params in G: 40247940 D: 39448258
Inception Metrics will be saved to logs/BigGAN_I128_hdf5_seed0_Gch64_Dch64_bs256_nDa4_nGa4_Gattn64_Dattn64_Commit15.00_Layer0123_Dicsz10_Dicdecay0.80_quant_log.jsonl
Training Metrics will be saved to logs/BigGAN_I128_hdf5_seed0_Gch64_Dch64_bs256_nDa4_nGa4_Gattn64_Dattn64_Commit15.00_Layer0123_Dicsz10_Dicdecay0.80_quant
Using dataset root location /filer/tmp2/an499_tmp2/ILSVRC128.hdf5
Using multiepoch sampler from start_itr 0...
Parallelizing Inception module...
Beginning training at epoch 0...
Length dataset output is 5000000
1/4883 (  0.00%) Traceback (most recent call last):
  File "train.py", line 227, in <module>
    main()
  File "train.py", line 224, in main
    run(config)
  File "train.py", line 184, in run
    metrics = train(x, y)
  File "/common/users/an499/papers/GGAN/GGAN_code/FQ-GAN/FQ-BigGAN/train_fns.py", line 45, in train
    x[counter], y[counter], train_G=False, split_D=config['split_D'])
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
    output.reraise()
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/common/users/an499/papers/GGAN/GGAN_code/FQ-GAN/FQ-BigGAN/BigGAN.py", line 441, in forward
    G_z = self.G(z, self.G.shared(gy))
  File "/common/users/an499/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/common/users/an499/papers/GGAN/GGAN_code/FQ-GAN/FQ-BigGAN/BigGAN.py", line 240, in forward
    ys = [torch.cat([y, item], 1) for item in zs[1:]]
  File "/common/users/an499/papers/GGAN/GGAN_code/FQ-GAN/FQ-BigGAN/BigGAN.py", line 240, in <listcomp>
    ys = [torch.cat([y, item], 1) for item in zs[1:]]
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

gen_images

How can I use the FQ-StyleGAN pretrained model to generate 1000 images at 1024×1024 resolution?

FQ-U-GAT-IT pytorch

Great work!
Could you release a PyTorch version of FQ-U-GAT-IT?
Thank you so much!

use quantized or not?

return loss, perplexity

quantized = inputs + tf.stop_gradient(quantized - inputs)

You return loss, perplexity but not quantized at Line 104; is it better not to use the quantized_x and only compute the quant_loss?

x = lrelu(x, 0.2)
for i in range(1, self.n_dis - 1):
    x = conv(x, channel * 2, kernel=4, stride=2, pad=1, pad_type='reflect', sn=self.sn, scope='conv_' + str(i))
    x = lrelu(x, 0.2)
    if i in self.quant_layers:
        diff, ppl = self.quantize[i](x, reuse, layer=i)
        quant_loss += diff
    channel = channel * 2
x = conv(x, channel * 2, kernel=4, stride=1, pad=1, pad_type='reflect', sn=self.sn, scope='conv_last')

A better Colab?

Hi, great work on this, but could you make a better Colab version that is more end-to-end and can run without the user having to move datasets and models into place on their own? Preferably with the option to run inference on a custom image.

weird result

I tried the selfie2anime checkpoint and got a weird result (image attached).

Request - Trained weights

Hello, thanks for sharing your research.
I'm studying style transfer and would like to test with your trained weights; could you be so kind as to share them?

Metric of FQ-U-GAT-IT

Hi! I have seen that you use KID as your evaluation metric, but I can't figure out how to compute the KID result with the code you shared. Can you give me some help on how to calculate it?

WGAN problem

Hi! I can't train my model with your FQ module on top of WGAN; I suffer from gradient explosion. Can you train FQ-GAN based on WGAN? Thank you very much ^_^
