
torchgan / torchgan

Research framework for easy and efficient training of GANs, based on PyTorch

Home Page: https://torchgan.readthedocs.io/en/latest/

License: MIT License

Languages: Python 67.09%, Jupyter Notebook 29.39%, TeX 3.52%
Topics: machine-learning, computer-vision, generative-adversarial-networks, python3, pytorch, neural-networks, deep-learning, generative-model, python, gans

torchgan's Introduction

TorchGAN

Framework for easy and efficient training of GANs, based on PyTorch

Project Status: Active – The project has reached a stable, usable state and is being actively developed.


TorchGAN is a PyTorch-based framework for designing and developing Generative Adversarial Networks. The framework has been designed to provide building blocks for popular GANs and to allow customization for cutting-edge research. Using TorchGAN's modular structure allows (see the sketch after this list):

  • Trying out popular GAN models on your dataset.
  • Plugging in your new loss function, architecture, etc. alongside the traditional ones.
  • Seamlessly visualizing the training with a variety of logging backends.
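To make the workflow concrete, here is a minimal end-to-end sketch on MNIST, assembled from the configuration patterns that appear in the issues further down this page. Treat it as a hedged sketch: the exact constructor arguments may differ across torchgan versions, so check the documentation linked above.

import torch
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.optim import Adam
from torchgan.models import DCGANGenerator, DCGANDiscriminator
from torchgan.losses import LeastSquaresGeneratorLoss, LeastSquaresDiscriminatorLoss
from torchgan.trainer import Trainer

# Network configuration in the dict form the Trainer expects.
dcgan_network = {
    "generator": {
        "name": DCGANGenerator,
        "args": {"encoding_dims": 100, "out_channels": 1},
        "optimizer": {"name": Adam, "args": {"lr": 0.0001, "betas": (0.5, 0.999)}},
    },
    "discriminator": {
        "name": DCGANDiscriminator,
        "args": {"in_channels": 1},
        "optimizer": {"name": Adam, "args": {"lr": 0.0003, "betas": (0.5, 0.999)}},
    },
}
lsgan_losses = [LeastSquaresGeneratorLoss(), LeastSquaresDiscriminatorLoss()]

# MNIST padded from 28x28 to 32x32 to match the DCGAN default size.
dataset = dsets.MNIST(
    root="./data", train=True, download=True,
    transform=transforms.Compose([
        transforms.Pad(2), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))
    ]),
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
trainer = Trainer(dcgan_network, lsgan_losses, sample_size=64, epochs=5, device=device)
trainer(dataloader)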
Continuous integration covers Linux (py3.8, py3.9), OSX (py3.8, py3.9), and Windows (py3.9) across PyTorch 1.8, 1.9, and nightly.

Installation

Using pip (for stable release):

  $ pip install torchgan

Using pip (for latest master):

  $ pip install git+https://github.com/torchgan/torchgan.git

From source:

  $ git clone https://github.com/torchgan/torchgan.git
  $ cd torchgan
  $ python setup.py install

Documentation

The documentation is available at https://torchgan.readthedocs.io/en/latest/

The documentation for this package can be generated locally.

  $ git clone https://github.com/torchgan/torchgan.git
  $ cd torchgan/docs
  $ pip install -r requirements.txt
  $ make html

Now open the generated HTML, typically build/html/index.html, in a browser.

Tutorials


The tutorials directory contains a set of tutorials to get you started with torchgan. These tutorials can be run using Google Colab or Binder. It is highly recommended that you follow the tutorials in the following order.

  1. Introductory Tutorials:
  2. Intermediate Tutorials:
  3. Advanced Tutorials:

Supporting and Citing

This software was developed as part of academic research. If you would like to help support it, please star the repository. If you use this software as part of your research, teaching, or other activities, we would be grateful if you could cite the following:

@article{Pal2021,
  doi = {10.21105/joss.02606},
  url = {https://doi.org/10.21105/joss.02606},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {66},
  pages = {2606},
  author = {Avik Pal and Aniket Das},
  title = {TorchGAN: A Flexible Framework for GAN Training and Evaluation},
  journal = {Journal of Open Source Software}
}

List of publications & submissions using TorchGAN (please open a pull request to add missing entries):

Contributing

We appreciate all contributions. If you are planning to contribute bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. For more detailed guidelines head over to the official documentation.

Contributors

This package has been developed by

  • Avik Pal (@avik-pal)
  • Aniket Das (@Aniket1998)

This project exists thanks to all the people who contribute.

torchgan's People

Contributors

aniket1998, avik-pal, avinandan22, dependabot[bot], jspisak, monkeywithacupcake, namanbiyani, shi-weili, xiangyyang


torchgan's Issues

WGAN Penalty not Working

import torch
import torchvision
from torch.autograd import Variable
from torch.optim import Adam, SGD
import torch.utils.data as data
import torchvision.datasets as dsets
import torchvision.transforms as transforms

from torchgan import *
from torchgan.models import Generator, Discriminator,\
                            SmallDCGANGenerator, SmallDCGANDiscriminator
from torchgan.losses import WassersteinGeneratorLoss, WassersteinDiscriminatorLoss,\
                            LeastSquaresGeneratorLoss, LeastSquaresDiscriminatorLoss
from torchgan.trainer import Trainer, WGANGP_Trainer

def get_dataset():
    train_dataset = dsets.MNIST(root='/data/avikpal', train=True,
                                transform=transforms.Compose([transforms.Pad((2, 2)),
                                                              transforms.ToTensor(),
                                                              transforms.Normalize(mean = (0.0, 0.0, 0.0), std = (1.0, 1.0, 1.0))]),
                                download=True)
    train_loader = data.DataLoader(train_dataset, batch_size=128, shuffle=True)
    return train_loader

trainer = WGANGP_Trainer(
    10,
    SmallDCGANGenerator(out_channels=1),
    SmallDCGANDiscriminator(in_channels=1),
    Adam, Adam,
    LeastSquaresGeneratorLoss, LeastSquaresDiscriminatorLoss,
    sample_size=64, epochs=5, verbose=5,
    device=torch.device("cuda:2"),
    checkpoints="./model/gan2", recon="./images2",
    optimizer_generator_options={"lr": 0.001},
    optimizer_discriminator_options={"lr": 0.01})

trainer(get_dataset)
which fails with:

RuntimeError: One of the differentiated Tensors does not require grad
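For context, here is a generic WGAN-GP penalty sketch (plain PyTorch, not torchgan's internal code; discriminator, real, and fake are illustrative names). This error typically appears when the interpolated samples are not marked with requires_grad=True before calling torch.autograd.grad:

import torch

def gradient_penalty(discriminator, real, fake):
    # Interpolate between real and fake image batches. requires_grad must be
    # set here, otherwise autograd.grad raises "One of the differentiated
    # Tensors does not require grad".
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_interp = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=d_interp.sum(), inputs=interp, create_graph=True
    )[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()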

Pytorch version

Hello, I have a torchgan install at version v0.0.4. Which version of PyTorch should be used? I did not find any documentation on this.

best,
Sixin

GAN Models

List of GAN Models we want to have before v0.1

  • DCGAN
  • Conditional GAN
  • Autoencoder Model for EBGAN and BEGAN
  • Self Attention GAN

@avik-pal Add any other models you find relevant.

Fix directory creation for checkpoints

The code doesn't respect the way we accept the directory name for the checkpoints: the directory name is the part of the string appearing before the final "/". Should be trivial to fix.

Resuming training is unintuitive

Describe the bug
The process to resume training a previously trained model is unintuitive.

To Reproduce
Steps to reproduce the behavior:

  1. Package Versions: torchgan 0.1.0, torchvision 0.13.1, PyTorch 1.12.1
  2. Logging Configurations:
    print(torchgan.logging.backends.CONSOLE_LOGGING)
    1
    print(torchgan.logging.backends.VISDOM_LOGGING)
    0
    print(torchgan.logging.backends.TENSORBOARD_LOGGING)
    0
  3. Minimal Working Example for the error
    The issue can be easily encountered by slightly modifying Tutorial 1. Follow the tutorial normally, until you reach the "Visualizing the Samples" section. Before that section, add the following code cell:
    trainer.load_model("./model/gan4.model")
    trainer(dataloader)
    Now execute the new cell.

Expected behavior
This should have continued training the model for an additional 10 epochs on CUDA, or 5 epochs otherwise. However, because the Trainer's epochs parameter represents the total number of epochs to train, the function returns without doing any further training. To achieve the expected behaviour, the user must instead create a new Trainer object and pass (current_epochs + desired_additional_epochs) as the value of the epochs parameter. This is unintuitive and requires the user to manually keep track of how many epochs have been completed whenever they end a training session they plan to continue later.

Desktop (please complete the following information):

  • OS: Windows 10 Pro, version 21H2

Installation

  • Pip

Additional context
Fixing this would involve rewriting significant portions of the BaseTrainer class. I would suggest allowing the user to pass a number of epochs to train into the __call__() function, rather than having it set at object creation. A sketch of the current workaround follows.
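In the meantime, resuming under the current total-epochs semantics looks roughly like this (a hedged sketch: dcgan_network, lsgan_losses, device, and dataloader stand for the Tutorial 1 objects and are not defined here):

from torchgan.trainer import Trainer

# epochs counts the *total* number of epochs, so to train 5 more after a
# 10-epoch run we must build a fresh Trainer with epochs=15 and reload the
# checkpoint saved by the earlier session.
trainer = Trainer(dcgan_network, lsgan_losses, sample_size=64, epochs=15, device=device)
trainer.load_model("./model/gan4.model")
trainer(dataloader)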

Fix Gradient Visualization

Currently, instead of displaying the gradient norms, we are showing the norms of the model parameters.
Problems in fixing this simply:

  1. We are clearing the gradient buffers every time; this makes it impossible to track gradients for different losses.
  2. Printing the gradient wrt only the last executed loss will lead to wrong inference on the part of the user.
  3. Can't make gradients a property of the loss, as that makes custom losses a nightmare to implement (we would have to impose a lot of constraints).

Two Time-Step Update Rule

I cannot find any reference to TTUR in the documentation (see https://arxiv.org/abs/1706.08500 for more about the technique). Would it be possible to implement it for WGAN-GP in torchgan? I'm new to the framework, but I think the right place to implement it would be torchgan.losses.WassersteinGradientPenalty.
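For context, TTUR (Heusel et al., 2017) mostly amounts to training the discriminator on a faster time scale, i.e. with a larger learning rate, than the generator. At the configuration level that could look like the sketch below, in the network-dict style used elsewhere on this page; the 1:4 ratio and other values are illustrative assumptions, not settings from the torchgan docs:

from torch.optim import Adam

# Illustrative TTUR-style settings: the discriminator updates on a faster
# time scale (larger lr) than the generator.
ttur_optimizers = {
    "generator": {"name": Adam, "args": {"lr": 1e-4, "betas": (0.5, 0.999)}},
    "discriminator": {"name": Adam, "args": {"lr": 4e-4, "betas": (0.5, 0.999)}},
}
# Each dict drops into the "optimizer" field of the corresponding model's
# entry in the network configuration dict.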

Progressive growing of GANs?

I've been using TorchGAN to train the global generator/discriminator of pix2pixHD and got great results. I wonder if it's supported to load the pre-trained global models, add local enhancer layers on top of it, and continue the training?

If the current trainer doesn't support this, could I have some directions on how to implement it? Thanks!

Docs should emphasize loss functions with logits

I noticed that minimax_discriminator_loss uses binary_cross_entropy_with_logits (cf. binary_cross_entropy) which "combines a Sigmoid layer and the BCELoss" according to the docs.

This is good practice for numerical stability, but emphasizing it in the docs would help prevent users from inserting a sigmoid as the final non-linearity in custom models; see the sketch below. Maybe even raise a warning if we can somehow detect this.
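To make the pitfall concrete, a small self-contained sketch (plain PyTorch, not torchgan code): since the loss already applies the sigmoid internally, a discriminator ending in nn.Sigmoid() gets squashed twice.

import torch
import torch.nn as nn
import torch.nn.functional as F

d_logits = nn.Linear(10, 1)                                  # correct: raw scores
d_sigmoid = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())    # the pitfall

x = torch.randn(4, 10)
target = torch.ones(4, 1)

# binary_cross_entropy_with_logits applies a sigmoid internally, so the
# second model effectively computes sigmoid(sigmoid(score)), which distorts
# the loss and its gradients.
loss_ok = F.binary_cross_entropy_with_logits(d_logits(x), target)
loss_double = F.binary_cross_entropy_with_logits(d_sigmoid(x), target)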

No easy support for custom noise priors

The loss ops and the sampling from the noise priors are too entangled with one another, and the entire library is written with a Normal distribution in mind for the noise prior. This leads to inconveniences such as overriding the entire train ops of a loss (and even the sampler) just to make a change as minor as switching the noise prior from Normal to Uniform.

I propose having a separate set of modules for priors, keeping both fixed and learnable priors in mind. Will submit a PR regarding the same. A rough sketch of the idea follows.
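One possible shape for such a module, offered purely as a hedged sketch (Prior, NormalPrior, and UniformPrior are hypothetical names, not torchgan API):

import torch
import torch.nn as nn

class Prior(nn.Module):
    """Hypothetical base class: a prior is anything that can produce noise."""

    def sample(self, batch_size, dims, device="cpu"):
        raise NotImplementedError

class NormalPrior(Prior):
    def sample(self, batch_size, dims, device="cpu"):
        return torch.randn(batch_size, dims, device=device)

class UniformPrior(Prior):
    def sample(self, batch_size, dims, device="cpu"):
        # Uniform noise on (-1, 1).
        return torch.rand(batch_size, dims, device=device) * 2 - 1

# A sampler that accepts any Prior instance no longer hard-codes torch.randn,
# so swapping the prior stops requiring an override of the loss train ops.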

AttributeError: 'NBMasterBar' object has no attribute 'first_bar'

Describe the bug
An error will be reported if the 0.2.0 version of fastprogress is installed

Traceback (most recent call last):
  File "dc_gan.py", line 205, in <module>
    trainer(dataloader)
  File "/usr/local/anaconda3/envs/pt1.0py37/lib/python3.7/site-packages/torchgan/trainer/base_trainer.py", line 455, in __call__
    self.train(data_loader, **kwargs)
  File "/usr/local/anaconda3/envs/pt1.0py37/lib/python3.7/site-packages/torchgan/trainer/base_trainer.py", line 400, in train
    master_bar_iter.first_bar.comment = f"Training Progress"
AttributeError: 'ConsoleMasterBar' object has no attribute 'first_bar'

I used fastprogress 0.2.0 to create a simple example

from fastprogress import master_bar, progress_bar
master_bar_iter = master_bar(range(0, 3))
master_bar_iter.first_bar.comment = f'first bar stat'

Produced a similar error

AttributeError                            Traceback (most recent call last)
<ipython-input-3-ffd0b33d372e> in <module>
----> 1 master_bar_iter.first_bar.comment = f'first bar stat'

AttributeError: 'ConsoleMasterBar' object has no attribute 'first_bar'

I don't get the error when I install fastprogress 0.1.20

pip install fastprogress==0.1.20

Loss Functions

List of Loss Functions we want to have before v0.1 arranged roughly in order of priority

  • Minimax Discriminator Loss
  • Minimax Generator Loss
  • Minimax Discriminator Loss with Nonsaturating Heuristic
  • Minimax Generator Loss with Nonsaturating Heuristic
  • Wasserstein Discriminator Loss
  • Wasserstein Generator Loss
  • Wasserstein Gradient Penalty
  • Least Squares Discriminator Loss
  • Least Squares Generator Loss
  • Mutual Information Penalty
  • EBGAN Discriminator Loss
  • EBGAN Generator Loss
  • ACGAN Discriminator Loss
  • ACGAN Generator Loss
  • EBGAN Repelling Regularizer
  • DRAGAN Discriminator Loss

@Aniket1998 add more losses you find relevant and needed for initial release.

Expose functional API for losses

This needs to be done in 2 parts:

  • Separate out the functional forms of the losses into a module of losses
  • Document the necessary functions

We don't need to expose all of the functions. Some do nothing fancy; those can be removed and their entire computation performed inside the forward function.

Some confusion regarding the implementation of virtual batch norm.

Describe the bug
Some confusion regarding the implementation of virtual batch norm.

  1. Referring to another implementation (https://github.com/dansuh17/segan-pytorch/blob/938b6c837b6f091d0ad1e67fad9db32045a90151/vbnorm.py#L68), it seems that in the torchgan version the reference mean and variance are reset to None (self.ref_mean = None; self.ref_var = None) after a batch has passed. In the next batch they are recomputed from that batch and applied, rather than coming from a fixed "virtual batch". So alternate batches use their own statistics, and the rest use the previous batch's statistics.
  2. Even when the statistics are updated, they are copied directly from a batch, as opposed to being interpolated as in https://github.com/dansuh17/segan-pytorch/blob/938b6c837b6f091d0ad1e67fad9db32045a90151/vbnorm.py#L66 (see the sketch after this list).
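For reference, the interpolation performed by the linked implementation looks roughly like this (a hedged sketch of virtual batch norm statistics for (N, features) activations, not torchgan's code; conv layers would also reduce over the spatial dimensions):

import torch

def vbn_statistics(x, ref_mean, ref_var):
    # Statistics of the current batch.
    batch_mean = x.mean(dim=0, keepdim=True)
    batch_var = x.var(dim=0, unbiased=False, keepdim=True)
    # Interpolate with the fixed reference-batch statistics instead of
    # overwriting them; new_coeff weights the current batch against the
    # reference batch.
    new_coeff = 1.0 / (x.size(0) + 1.0)
    old_coeff = 1.0 - new_coeff
    mean = new_coeff * batch_mean + old_coeff * ref_mean
    var = new_coeff * batch_var + old_coeff * ref_var
    return mean, var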

Executing trainer = Trainer(dcgan_network, lsgan_losses, sample_size=64, epochs=epochs, device=device) errors:

Traceback (most recent call last):
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/site-packages/urllib3/connection.py", line 181, in connect
    conn = self._new_conn()
  File "/home/movan/anaconda3/envs/tensorflow/lib/python3.6/site-packages/urllib3/connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f75df0ba6a0>: Failed to establish a new connection: [Errno 111] Connection refused

JOSS Review: Code coverage

Issue Description

I am reviewing the JOSS submission linked to this repository, see: Joss Review

Running

python -m pytest --cov=torchgan tests/ --cov-report term-missing

showcases that the test coverage for the losses module of the repository is quite low. Are you planning on expanding your loss test cases? I think this is important for this type of package, as it drives the model training, and is as such linked to the "functionality" checkbox for the review.

Improve image sampling in the trainer

The exact approach to sampling images from the trainer at the end of every epoch depends on what the generator requires as input during its forward pass. The current sample_images method is mainly geared towards generators that accept noise input (with a small workaround for generators that require labels). This needs to be generalised properly and made more flexible, so that users have greater control over the sampling process; one possible direction is sketched below.
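One direction, as a hedged sketch only (the sample_inputs callback is a hypothetical API, not something torchgan currently exposes): let the user register a callable that builds whatever inputs their generator's forward pass needs.

import torch

# Hypothetical user-supplied callback producing generator inputs; a
# label-conditioned generator would return (noise, labels) here instead.
def sample_inputs(batch_size, device):
    return (torch.randn(batch_size, 100, device=device),)

# A generalised sample_images could then reduce to:
#   images = generator(*sample_inputs(sample_size, device))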

Mistake in CycleGAN tutorial about identity loss

In CycleGAN Tutorial.ipynb, the description and code for the identity loss are wrong.
The current code computes the loss on the translated images:

fake_a = gen_b2a(image_b)
fake_b = gen_a2b(image_a)
loss_identity = 0.5 * (F.l1_loss(fake_a, image_a) + F.l1_loss(fake_b, image_b))

The correct code should instead feed each generator an image that is already in its output domain:

loss_identity = 0.5 * (F.l1_loss(gen_b2a(image_a), image_a) + F.l1_loss(gen_a2b(image_b), image_b))

DCGAN with images larger than 32x32

Describe the bug
Getting the following error when trying to train on images of size larger than 32x32:
"RuntimeError: shape '[64]' is invalid for input of size 1600"

To Reproduce
My hyperparameters for the networks as well as the data I am loading are below

tfs = transforms.Compose([
    transforms.Resize(64),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_dataset = torchvision.datasets.ImageFolder(root=data_dir+"/train", transform=tfs)
val_dataset = torchvision.datasets.ImageFolder(root=data_dir+"/val", transform=tfs)
test_dataset = torchvision.datasets.ImageFolder(root=data_dir+"/test", transform=tfs)

dataset = train_dataset

dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)


dcgan_network = {
    "generator": {
        "name": DCGANGenerator,
        "args": {
            "encoding_dims": 100,
            "out_channels": 3,
            "step_channels": 64,
            "nonlinearity": nn.LeakyReLU(0.2),
            "last_nonlinearity": nn.Tanh(),
        },
        "optimizer": {"name": Adam, "args": {"lr": 0.0001, "betas": (0.5, 0.999)}},
    },
    "discriminator": {
        "name": DCGANDiscriminator,
        "args": {
            "in_channels": 3,
            "step_channels": 64,
            "nonlinearity": nn.LeakyReLU(0.2),
            "last_nonlinearity": nn.LeakyReLU(0.2),
        },
        "optimizer": {"name": Adam, "args": {"lr": 0.0003, "betas": (0.5, 0.999)}},
    },
}

Expected behavior
Normal training as usual

Installation

  • Conda
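A likely cause, stated as an assumption rather than a confirmed diagnosis: the DCGAN models default to 32x32 inputs/outputs, so for 64-pixel images both networks need their size argument raised. If the out_size/in_size arguments exist in your torchgan version (verify against the docs), the fix would be:

# Assumed fix: make both networks operate at 64x64 instead of the 32x32
# default (out_size / in_size are believed to be the relevant arguments).
generator_args = {"encoding_dims": 100, "out_channels": 3, "step_channels": 64,
                  "out_size": 64}
discriminator_args = {"in_channels": 3, "step_channels": 64, "in_size": 64}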

Examples on Multivariate Time Series

The current framework is heavily oriented towards computer vision problems, for which there are already lots of available resources. It would be great if you could also cover multivariate time series, where Generative Adversarial Networks have several very important uses (data generation, augmentation, imputation, denoising, and anomaly detection) in domains such as medicine and finance. It would be great if you could add examples for a few of these use-cases. Enclosed is one such multivariate time series dataset, consisting of log-returns where cubic interpolation is employed for imputation and marked with a Boolean; I am sending it in case you would like an example of such a common use-case in the financial world.

Dataset.zip

tests should be installed to torchgan subfolder

Describe the bug
torchgan installs its tests to the site-packages folder, not to site-packages/torchgan.
To Reproduce
Steps to reproduce the behavior:

  1. torchgan 0.0.2.r33.g8c2bc99


Desktop (please complete the following information):

  • OS: ArchLinux

Installation

  • Build from source; see the build() and package() functions in PKGBUILD.

Additional context
You might consider moving tests folder to torchgan folder.

Evaluation Metrics

The following evaluation metrics need to be implemented

  • Inception Score
  • Frechet Distance
  • Classifier Score measured with a custom user-trained classifier
  • Sliced Wasserstein Distance

@avik-pal Add any more metrics that you think should be part of the list. (The standard definitions of the first two are recalled below.)
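For reference, the standard definitions from the literature (not torchgan-specific):

% Inception Score (Salimans et al., 2016)
\mathrm{IS}(p_g) = \exp\!\Big( \mathbb{E}_{x \sim p_g}\, D_{\mathrm{KL}}\big( p(y \mid x) \,\|\, p(y) \big) \Big)

% Frechet (Inception) Distance between Gaussians fitted to real and
% generated features (Heusel et al., 2017)
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\big( \Sigma_r + \Sigma_g - 2 (\Sigma_r \Sigma_g)^{1/2} \big)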

Refactor the Logger

The Logger implemented in #50 was just to get things running. It is not very user-friendly, which makes it difficult to use custom Visualizers, or even to extend the logger to new Visualizers by modifying the core library.

Support different backends for Visualization

#50 attempts to add support for 2 backends. Through that PR we also drop any hard dependency on plotting and visualization libraries. In future, we would like to support other plotting backends.
Some of the prioritised backends are:

  • Tensorboard
  • Visdom
  • Matplotlib
  • Seaborn
  • Bokeh

However, this is not a high priority issue and will not be dealt with till a stable version has been released.

Improve Documentation

Since we are pushing for the release and putting more focus on feature completeness, I am listing all the current modules. All of these need to be documented before the next release can be tagged.

  • Loss Functions: New functions lack the equations. We also don't have dedicated documentation for the functional forms of the losses. The train_ops in particular need proper documentation, as they are necessary for customizability.
  • Models: Models are fairly well documented. They just need some minor clean ups.
  • Metrics: The metrics API is a bit difficult to use. If we can't find a cleaner way to deal with metrics the only way is to have well-documented examples.
  • Logger: Lacks Documentation
  • Layers: Lacks Documentation
  • Trainer: Documentation is fine. However, needs a few demonstrative examples.

Add tests for Loss Functions

These need to be numeric tests, ideally checked against implementations in other frameworks for consistency.

Trainer can't deal with Losses with kwargs

We need to modify the _get_argument_maps() function to handle this. We should check whether an argument is optional and avoid throwing an error in that case.
This should be pretty easy to solve: we just need to pass everything as a keyword argument instead of a positional argument. A sketch of the optional-argument check follows.
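A minimal sketch of the check, assuming _get_argument_maps resolves a loss's train_ops parameters from the trainer's available arguments (the surrounding names are illustrative, not torchgan's actual internals):

import inspect

def map_arguments(train_ops, available_args):
    # Resolve each parameter of train_ops from the trainer's available
    # arguments. Parameters that declare a default are simply skipped when
    # missing, instead of raising.
    resolved = {}
    for name, param in inspect.signature(train_ops).parameters.items():
        if name in available_args:
            resolved[name] = available_args[name]
        elif param.default is inspect.Parameter.empty:
            raise KeyError(f"Argument '{name}' could not be resolved")
    return resolved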

Small mistake in residual.py

Hi, you have a mistake in the conditions at rows 75 and 167 in torchgan/layers/residual.py.

You want if i != len(filters) - 1 :)

Document the Code Properly

The code is being documented along with the PRs. However, we need to present some good examples and use LaTeX formatting wherever needed.
This will be done once we roll out v0.1-alpha, so it is not a high-priority task. Individual methods, however, need to be documented from the start.

Multi-attribute GAN support

Is your feature request related to a problem? Please describe.
I successfully used DCGAN to generate avatars, trained on cartoonset (https://google.github.io/cartoonset/). I want to make an improvement that allows specifying attributes such as hair color (among 10 choices) and face shape (among 7 choices), instead of just randomly generating an avatar.

Describe the solution you'd like
Could you please add one GAN network to support multiple attributes?

Describe alternatives you've considered
I tried CGAN, but it seems that it only supports one attribute at a time. A generic multi-attribute conditioning recipe is sketched below.
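For what it's worth, a common generic recipe (plain PyTorch, not an existing torchgan model): embed each attribute separately and concatenate the embeddings with the noise vector before feeding the generator.

import torch
import torch.nn as nn

class MultiAttributeInput(nn.Module):
    """Builds a conditioned latent from noise plus several categorical attributes."""

    def __init__(self, noise_dim=100, attr_sizes=(10, 7), embed_dim=16):
        super().__init__()
        # One embedding table per attribute, e.g. 10 hair colors and 7 face shapes.
        self.embeddings = nn.ModuleList(nn.Embedding(n, embed_dim) for n in attr_sizes)
        self.out_dim = noise_dim + embed_dim * len(attr_sizes)

    def forward(self, noise, attrs):
        # attrs: LongTensor of shape (batch, num_attributes).
        embedded = [emb(attrs[:, i]) for i, emb in enumerate(self.embeddings)]
        return torch.cat([noise] + embedded, dim=1)

# Usage: any generator whose encoding_dims equals cond.out_dim can consume this.
cond = MultiAttributeInput()
z = cond(torch.randn(4, 100), torch.tensor([[3, 5], [0, 1], [9, 6], [2, 2]]))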

run on the 2080ti RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:116

Running the self-attention GAN example errors:

Traceback (most recent call last):
  File "/home1/code/test_gan/self-att-GAN/self-attGAN.py", line 121, in <module>
    trainer(dataloader)
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torchgan-0.0.2-py3.6.egg/torchgan/trainer/trainer.py", line 454, in __call__
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torchgan-0.0.2-py3.6.egg/torchgan/trainer/trainer.py", line 416, in train
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torchgan-0.0.2-py3.6.egg/torchgan/trainer/trainer.py", line 343, in train_iter
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torchgan-0.0.2-py3.6.egg/torchgan/losses/loss.py", line 76, in train_ops
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home1/code/test_gan/self-att-GAN/self-attGAN.py", line 42, in forward
    return self.model(x)
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xujin/anaconda3/lib/python3.6/site-packages/torchgan-0.0.2-py3.6.egg/torchgan/layers/spectralnorm.py", line 73, in forward
RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:116

Fix coverage

The coverage results I have tested locally are around 70%, but on Travis they are reported as low as 22%. Apparently none of the files inside torchgan are being covered.

AttributeError while trying to use torchgan.layers.VirtualBatchNorm

I am getting the error AttributeError: 'VirtualBatchNorm' object has no attribute 'weight' while using the module to implement virtual batch normalization. For context, here is my weights_init function:

def weights_init(m):
    classname = m.__class__.__name__
    print(classname)
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)

Python version: 3.7.4
PyTorch version: 1.5.0
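A hedged workaround, assuming the root cause is that 'VirtualBatchNorm' matches the 'BatchNorm' substring but does not expose weight/bias attributes under those names: guard the initialization before touching them.

import torch.nn as nn

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1 and getattr(m, 'weight', None) is not None:
        # VirtualBatchNorm's class name matches 'BatchNorm' but it may not
        # carry `weight`/`bias` parameters, so only initialize when present.
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)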

Windows 10 installation not working

Can torchgan be installed on Windows 10? Are there ways to do so? I installed Anaconda 3-4.2.0 with Python 3.5, TensorFlow, torch, and torchvision.

InfoGAN model error

import torch
from torchgan.losses import MutualInformationPenalty
from torchgan.models import InfoGANGenerator, InfoGANDiscriminator

dist = torch.distributions.one_hot_categorical.OneHotCategorical(torch.tensor([0.1] * 10))

lossf = MutualInformationPenalty()
z = torch.randn(2, 100)
c_cont = torch.randn(2, 30)
c_dis = dist.sample([2])
print(c_dis)

G = InfoGANGenerator(10, 30)
D = InfoGANDiscriminator(10, 30)

fake = G(z, c_dis, c_cont)

score, q_categorical, q_gaussian = D(fake, return_latents=True)

loss = lossf(c_dis, c_cont, q_categorical, q_gaussian)
loss.backward()
Traceback (most recent call last):
  File "test.py", line 40, in <module>
    loss.backward()
  File "/usr/local/lib/python3.5/dist-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Hi,

There are quite a few errors in the documentation, and it took me a lot of time to build the model correctly.

When I used the above code, there were some errors, and I was wondering whether it was a usage error on my part.

Thanks.

Deprecate the Trainer

Background

TorchGAN started out with a primary focus on GANs for images. But recent literature suggests that GANs are also being used in time-series modeling, NLP, etc. Hence the generic name Trainer sends the wrong message.

Proposal

  1. Deprecate Trainer and ParallelTrainer in v0.0.4 and completely remove them in v0.0.5+
  2. Introduce an ImageTrainer and ParallelImageTrainer which are exact copies of the current version of Trainer and ParallelTrainer.
  3. Introduce application-specific Trainers to broaden the scope of TorchGAN. #110 discusses this.

Resuming training is not reflected in Logger

When training is resumed using load_model, there is no change in the state of the Logger. So if we resume training at epoch x, the images and all step values will again be stored starting from 1.

GAN models for generating HD images

Thanks a lot for this project! Is there any possibility of adding some models designed for generating HD images, e.g. StyleGAN (StyleGAN2, StyleGAN2-ADA)?

Thanks in advance!
