PyTorch implementation of PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION
YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)
- to be updated...
[step 1.] Prepare dataset
The CelebA-HQ dataset is not available yet, so I used 100,000 generated PNGs of CelebA-HQ released by the author.
The quality of the generated images was good enough for training and verifying the performance of the code.
If the CelebA-HQ dataset is released in the near future, I will update the experimental results.
[download]
- CAUTION: loading 1024 x 1024 images and resizing them on every forward pass makes training slow. I recommend using the standard CelebA dataset until the output resolution reaches 256x256.
---------------------------------------------
The training data folder should look like :
<train_data_root>
|--classA
|--image1A
|--image2A ...
|--classB
|--image1B
|--image2B ...
---------------------------------------------
[step 2.] Prepare environment using virtualenv
- you can easily set up the PyTorch and TensorFlow environment using virtualenv.
- CAUTION: if you have trouble installing PyTorch, install it manually using pip. [PyTorch Install]
$ virtualenv --python=python2.7 venv
$ . venv/bin/activate
$ pip install -r requirements.txt
[step 3.] Run training
- edit
config.py
to change parameters. (don't forget to change the path to the training images)
- specify which GPU devices to use, and change the "n_gpu" option in
config.py
to enable multi-GPU training.
- run and enjoy!
(example)
If using Single-GPU (device_id = 0):
$ vim config.py --> change "n_gpu=1"
$ CUDA_VISIBLE_DEVICES=0 python trainer.py
If using multiple GPUs (device ids = 1,3,7):
$ vim config.py --> change "n_gpu=3"
$ CUDA_VISIBLE_DEVICES=1,3,7 python trainer.py
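The multi-GPU setup above can be implemented with `nn.DataParallel`; the sketch below is a hedged illustration of the idea (the helper name `to_multi_gpu` is hypothetical, not from this repo). Note that once `CUDA_VISIBLE_DEVICES=1,3,7` is set, the visible devices are renumbered 0,1,2, so the wrapper simply takes `range(n_gpu)`.

```python
import torch
import torch.nn as nn

def to_multi_gpu(model, n_gpu):
    # With CUDA_VISIBLE_DEVICES set, device ids are renumbered from 0,
    # so n_gpu visible devices map to device_ids 0..n_gpu-1.
    if n_gpu > 1 and torch.cuda.is_available():
        return nn.DataParallel(model, device_ids=list(range(n_gpu)))
    # Single GPU or CPU: return the model unchanged.
    return model
```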
[step 4.] Display on tensorboard
- you can check the results on tensorboard.
$ tensorboard --logdir repo/tensorboard --port 8888
- open <host_ip>:8888 in your browser.
The model is still being trained at the moment.
Results at higher resolutions will be updated soon.
- Equalized learning rate (weight normalization)
- Pixel-wise normalization
- Support WGAN-GP loss
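The first two features above can be sketched in a few lines of PyTorch. This is an illustrative re-implementation of the techniques from the PGGAN paper, not necessarily the code used in this repo: pixel-wise normalization rescales each pixel's feature vector to unit average magnitude, and equalized learning rate initializes weights from N(0, 1) and applies the He scaling constant at runtime instead of at initialization.

```python
import torch

class PixelwiseNorm(torch.nn.Module):
    """Pixel-wise feature normalization: b = a / sqrt(mean_c(a^2) + eps),
    where the mean is taken over the channel axis at each spatial location."""
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (N, C, H, W)
        return x / torch.sqrt(x.pow(2).mean(dim=1, keepdim=True) + self.eps)

class EqualizedConv2d(torch.nn.Module):
    """Conv layer with equalized learning rate: weights start at N(0, 1)
    and the He constant sqrt(2 / fan_in) is applied every forward pass."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        torch.nn.init.normal_(self.conv.weight)   # N(0, 1) init
        torch.nn.init.zeros_(self.conv.bias)
        fan_in = in_ch * kernel_size ** 2
        self.scale = (2.0 / fan_in) ** 0.5        # He scaling constant

    def forward(self, x):
        # Scaling the input is equivalent to scaling the weights for a
        # linear op, and keeps the runtime-scaling property of the paper.
        return self.conv(x * self.scale)
```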
MinchulShin, @nashory