
insgen's Introduction

GenForce Lib for Generative Modeling

An efficient PyTorch library for deep generative modeling. May the Generative Force (GenForce) be with You.


Updates

  • Encoder Training: We support training encoders on top of pre-trained GANs for GAN inversion.
  • Model Converters: You can easily migrate your already started projects to this repository. Please check here for more details.

Highlights

  • Distributed training framework.
  • Fast training speed.
  • Modular design for prototyping new models.
  • Model zoo containing a rich set of pretrained GAN models, with a Colab live demo to play with.

Installation

  1. Create a virtual environment via conda.

    conda create -n genforce python=3.7
    conda activate genforce
  2. Install CUDA and cuDNN. (We use CUDA 10.0 in case you would like to use TensorFlow 1.15 for model conversion.)

    conda install cudatoolkit=10.0 cudnn=7.6.5
  3. Install torch and torchvision.

    pip install torch==1.7 torchvision==0.8
  4. Install the requirements.

    pip install -r requirements.txt
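
Once these four steps are done, a quick sanity check along the following lines confirms that the expected versions are installed and that PyTorch can see the GPU (a minimal sketch, not part of the repository):

    # Sanity check: confirm package versions and CUDA visibility.
    import torch
    import torchvision

    print(torch.__version__)          # expect 1.7.x
    print(torchvision.__version__)    # expect 0.8.x
    print(torch.cuda.is_available())  # should be True if CUDA is set up correctly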

Quick Demo

We provide a quick training demo, scripts/stylegan_training_demo.py, which allows you to train StyleGAN on a toy dataset (500 anime-face images at 64×64 resolution). Try it via

./scripts/stylegan_training_demo.sh

We also provide an inference demo, synthesize.py, which allows you to synthesize images with pre-trained models. The generated images can be found at work_dirs/synthesis_results/. Try it via

python synthesize.py stylegan_ffhq1024

You can also play the demo at Colab.
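
To inspect the synthesis results programmatically, a small sketch like the following can be used (it assumes the images are saved as PNG files under the directory mentioned above):

    # List the images written by synthesize.py and print their sizes.
    from pathlib import Path
    from PIL import Image

    out_dir = Path("work_dirs/synthesis_results")
    for path in sorted(out_dir.glob("*.png")):  # assumption: PNG outputs
        with Image.open(path) as img:
            print(path.name, img.size)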

Play with GANs

Test

Pre-trained models can be found at model zoo.

  • On local machine:

    GPUS=8
    CONFIG=configs/stylegan_ffhq256_val.py
    WORK_DIR=work_dirs/stylegan_ffhq256_val
    CHECKPOINT=checkpoints/stylegan_ffhq256.pth
    ./scripts/dist_test.sh ${GPUS} ${CONFIG} ${WORK_DIR} ${CHECKPOINT}
  • Using slurm:

    CONFIG=configs/stylegan_ffhq256_val.py
    WORK_DIR=work_dirs/stylegan_ffhq256_val
    CHECKPOINT=checkpoints/stylegan_ffhq256.pth
    GPUS=8 ./scripts/slurm_test.sh ${PARTITION} ${JOB_NAME} \
        ${CONFIG} ${WORK_DIR} ${CHECKPOINT}
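
To evaluate several checkpoints in a row, a small wrapper around dist_test.sh can help (a convenience sketch, not a script shipped with this repository):

    # Evaluate a list of checkpoints sequentially with dist_test.sh.
    import subprocess
    from pathlib import Path

    GPUS = "8"
    CONFIG = "configs/stylegan_ffhq256_val.py"
    checkpoints = [
        "checkpoints/stylegan_ffhq256.pth",
        # add further checkpoints here
    ]
    for ckpt in checkpoints:
        work_dir = f"work_dirs/{Path(ckpt).stem}_val"
        subprocess.run(["./scripts/dist_test.sh", GPUS, CONFIG, work_dir, ckpt],
                       check=True)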

Train

All files produced during training, such as log messages, checkpoints, and synthesis snapshots, will be saved to the work directory.

  • On local machine:

    GPUS=8
    CONFIG=configs/stylegan_ffhq256.py
    WORK_DIR=work_dirs/stylegan_ffhq256_train
    ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR} \
        [--options additional_arguments]
  • Using slurm:

    CONFIG=configs/stylegan_ffhq256.py
    WORK_DIR=work_dirs/stylegan_ffhq256_train
    GPUS=8 ./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} \
        ${CONFIG} ${WORK_DIR} \
        [--options additional_arguments]
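
Because every artifact lands in the work directory, training progress can be followed with a small tail-style script (a generic sketch; that the runner writes *.log files there is an assumption):

    # Follow the newest .log file in the work directory, tail -f style.
    import time
    from pathlib import Path

    work_dir = Path("work_dirs/stylegan_ffhq256_train")
    log = max(work_dir.glob("*.log"), key=lambda p: p.stat().st_mtime)
    with log.open() as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                print(line, end="")
            else:
                time.sleep(1.0)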

Play with Encoders for GAN Inversion

Train

  • On local machine:

    GPUS=8
    CONFIG=configs/stylegan_ffhq256_encoder_y.py
    WORK_DIR=work_dirs/stylegan_ffhq256_encoder_y
    ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR} \
        [--options additional_arguments]
  • Using slurm:

    CONFIG=configs/stylegan_ffhq256_encoder_y.py
    WORK_DIR=work_dirs/stylegan_ffhq256_encoder_y
    GPUS=8 ./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} \
        ${CONFIG} ${WORK_DIR} \
        [--options additional_arguments]
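
For intuition, encoder training for GAN inversion optimizes an encoder to map an image into the latent space of a frozen, pre-trained generator so that the reconstruction matches the input. The sketch below shows that bare-bones objective with hypothetical encoder/generator modules; it is not this repository's actual training loop:

    import torch.nn.functional as F

    def inversion_step(encoder, generator, real_images, optimizer):
        """One step of a minimal inversion objective (hypothetical modules).

        `generator` is assumed frozen (requires_grad=False on its parameters);
        `optimizer` only updates the encoder.
        """
        latents = encoder(real_images)         # E(x): image -> latent code
        recon = generator(latents)             # G(E(x)): latent -> reconstruction
        loss = F.mse_loss(recon, real_images)  # pixel-wise reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()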

Contributors

Member        Module
Yujun Shen    models and running controllers
Yinghao Xu    runner and loss functions
Ceyuan Yang   data loader
Jiapeng Zhu   evaluation metrics
Bolei Zhou    cheerleader

NOTE: The above table only lists the person in charge of each module. We help each other a lot and develop as a TEAM.

We welcome external contributors to join us for improving this library.

License

The project is under the MIT License.

Acknowledgement

We thank PGGAN, StyleGAN, StyleGAN2, and StyleGAN2-ADA for their work on high-quality image synthesis. We thank IDInvert and GHFeat for their contributions to GAN inversion. We also thank MMCV for inspiring the design of our controllers.

BibTex

We open-source this library to the community to facilitate research on generative modeling. If you like our work and use the codebase or models in your research, please cite it as follows.

@misc{genforce2020,
  title =        {GenForce},
  author =       {Shen, Yujun and Xu, Yinghao and Yang, Ceyuan and Zhu, Jiapeng and Zhou, Bolei},
  howpublished = {\url{https://github.com/genforce/genforce}},
  year =         {2020}
}

insgen's People

Contributors

justimyhxu, limbo0000, shenyujun


insgen's Issues

When will the code be released?

Dear Genforce group,

Thank you for your great work. I really like it.

When will the code be released?

Thank you for your help.

Best Wishes,

Alex

How to train InsGen with only 1 GPU

Thanks for sharing, but I only have 1 GPU, so the model cannot be trained.

I see that the reason multiple GPUs are needed is the 'effect of disabling shuffle BN to MoCo'.

But I cannot understand why the batch data must be shuffled among all GPUs rather than only within a single GPU.

Would you provide a way to shuffle batch data on 1 GPU, even if it is less effective?
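
For context, MoCo's shuffle BN exists to keep the key encoder from exploiting per-device batch statistics, which is why the batch is normally shuffled across GPUs. On a single GPU, one rough approximation is to permute the local batch before the key forward pass and undo the permutation afterwards (a sketch of the general technique, not something InsGen ships):

    import torch

    def batch_shuffle_single_gpu(x):
        # Permute samples within the local batch; return the inverse
        # permutation so the original order can be restored afterwards.
        idx = torch.randperm(x.size(0), device=x.device)
        return x[idx], torch.argsort(idx)

    def batch_unshuffle_single_gpu(x, inverse_idx):
        return x[inverse_idx]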

Training set selection

Hello,
Thanks for this awesome work; it really pushes the envelope for generation with limited data. Would you mind providing the random indices used for each training subset, for example for the FFHQ models?

Training does not work with 1 GPU

There seems to be a problem with the contrastive loss when using 1 GPU to train; training only works when setting no_insgen=true.

The output is:

Setting up augmentation...
Distributing across 1 GPUs...
Distributing Contrastive Heads across 1 GPUS...
Setting up training phases...
Setting up contrastive training phases...
Exporting sample images...
Initializing logs...
2021-09-18 04:23:26.767334: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Training for 25000 kimg...

Traceback (most recent call last):
  File "train.py", line 583, in <module>
    main() # pylint: disable=no-value-for-parameter
  File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "train.py", line 576, in main
    subprocess_fn(rank=0, args=args, temp_dir=temp_dir)
  File "train.py", line 421, in subprocess_fn
    training_loop.training_loop(rank=rank, **args)
  File "/home/katarina/ML/insgen/training/training_loop.py", line 326, in training_loop
    loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, sync=sync, gain=gain, cl_phases=cl_phases, D_ema=D_ema, g_fake_cl=not no_cl_on_g, **cl_loss_weight)
  File "/home/katarina/ML/insgen/training/contrastive_loss.py", line 156, in accumulate_gradients
    loss_Dreal = loss_Dreal + lw_real_cl * self.run_cl(real_img_tmp, real_c, sync, Dphase.module, D_ema, loss_name='D_cl')
  File "/home/katarina/ML/insgen/training/contrastive_loss.py", line 71, in run_cl
    loss = contrastive_head(logits0, logits1, loss_only=loss_only, update_q=update_q)
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/katarina/ML/insgen/training/contrastive_head.py", line 183, in forward
    self._dequeue_and_enqueue(k)
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/katarina/ML/insgen/training/contrastive_head.py", line 51, in _dequeue_and_enqueue
    keys = concat_all_gather(keys)
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/katarina/ML/insgen/training/contrastive_head.py", line 197, in concat_all_gather
    for _ in range(torch.distributed.get_world_size())]
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 748, in get_world_size
    return _get_group_size(group)
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 274, in _get_group_size
    default_pg = _get_default_group()
  File "/home/katarina/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
    raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
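
The traceback shows that concat_all_gather calls torch.distributed collectives even when only one process is running. A possible workaround (an untested sketch) is to initialize a one-process group before the training loop starts, so that those collectives operate over a world size of 1:

    # Untested workaround: create a single-process group so that
    # torch.distributed.get_world_size() and all_gather work with 1 GPU.
    import os
    import torch.distributed as dist

    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    if not dist.is_initialized():
        # use backend="gloo" instead for CPU-only runs
        dist.init_process_group(backend="nccl", rank=0, world_size=1)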

Reproducibility of FFHQ256-2K FID

Hi,
I have a question about the results of the experiment.

I got a best FID of 22.4 instead of the score of 11.92 reported in the paper.


I used the following command:

python train.py --gpus=8 --data=/FFHQ256_2k.zip --cfg=paper256 --outdir=InsG_2k --mirror=1

And I generated the FFHQ-2k data following the official StyleGAN2-ADA GitHub repository (https://github.com/NVlabs/stylegan2-ada-pytorch).

What should I do to reproduce the scores reported in the paper?

KeyError: 'D_ema' when training with the --resume option

Hi, thanks for your wonderful work.

I have some issues when I try to use my custom dataset.

First, the command that you mentioned in the README works for my custom dataset:

python train.py --gpus=8 \
    --data=${DATA_PATH} \
    --cfg=paper256 \
    --outdir=training_example

However, when I try to use the pretrained model (ffhq256) and retrain it with my custom dataset, I get an error message.
The command that I tried is:

CUDA_VISIBLE_DEVICES=0,1 python3 train.py --gpus=2 --data=${DATA_PATH} --cfg=paper256 --resume=ffhq256 --outdir=training_example

The error message that I got ends with KeyError: 'D_ema'.

I hope that I can get any solution!

Thanks!
