Implementation of GigaGAN (project page), a new SOTA GAN out of Adobe.
I will also fold in a few findings from lightweight gan: faster convergence (skip-layer excitation), better stability (reconstruction auxiliary loss in the discriminator), and improved results (GLU in the generator).
It will also contain the code for the 1k to 4k upsamplers, which I find to be the highlight of this paper.
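The lightweight-gan additions mentioned above can be sketched in PyTorch. This is an illustrative sketch only: the module names (`SkipLayerExcitation`, `GLUConv`) and layer sizes are hypothetical and not the actual code of this repository.

```python
import torch
from torch import nn

class SkipLayerExcitation(nn.Module):
    """Skip-layer excitation (lightweight gan): gate a high-resolution
    feature map with statistics pooled from a low-resolution one.
    Channel dims here are illustrative, not the real layer sizes."""
    def __init__(self, dim_low, dim_high):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),              # pool low-res map to 4x4
            nn.Conv2d(dim_low, dim_high, 4),      # reduce to 1x1 per channel
            nn.SiLU(),
            nn.Conv2d(dim_high, dim_high, 1),
            nn.Sigmoid(),                         # per-channel gate in [0, 1]
        )

    def forward(self, feat_high, feat_low):
        # broadcast the (B, C, 1, 1) gate over the high-res map
        return feat_high * self.net(feat_low)

class GLUConv(nn.Module):
    """Conv block with a gated linear unit in place of a plain activation,
    as used in the lightweight gan generator."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        # project to twice the channels, then split into value and gate
        self.conv = nn.Conv2d(dim_in, dim_out * 2, 3, padding=1)

    def forward(self, x):
        x, gate = self.conv(x).chunk(2, dim=1)
        return x * gate.sigmoid()
```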
Please join the LAION community if you are interested in helping out with the replication.
- StabilityAI for the sponsorship, as well as my other sponsors, for affording me the independence to open source artificial intelligence.
- 🤗 Huggingface for their accelerate library
- All the maintainers at OpenClip, for their SOTA open sourced contrastive learning text-image models
- make sure it can be trained unconditionally
- read the relevant papers and knock out all 3 auxiliary losses
  - matching aware loss
  - clip loss
  - vision-aided adversarial loss
- add reconstruction losses on arbitrary stages in the discriminator (lightweight gan)
- get a code review for the multi-scale inputs and outputs, as the paper was a bit vague
- port over the CLI from lightweight-gan / stylegan2-pytorch
- hook up laion dataset for text-image
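The reconstruction auxiliary loss from lightweight gan, listed in the todo above, can be sketched as follows. Everything here is a hypothetical illustration — `SimpleDecoder`, its layer sizes, and `recon_aux_loss` are assumptions, not this repository's implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class SimpleDecoder(nn.Module):
    """Tiny decoder that reconstructs an image from an intermediate
    discriminator feature map. Depth and channel counts are illustrative."""
    def __init__(self, dim, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(dim, dim // 2, 3, padding=1),
            nn.SiLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(dim // 2, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def recon_aux_loss(feats, real_images, decoder):
    """Self-supervised auxiliary loss on the discriminator (lightweight
    gan): decode intermediate features of a *real* image back to pixels
    and match them against a downsampled copy of that image."""
    recon = decoder(feats)
    target = F.interpolate(real_images, size=recon.shape[-2:])
    return F.mse_loss(recon, target)
```

In lightweight gan this loss is applied only to real images, which regularizes the discriminator into keeping a meaningful feature representation instead of collapsing onto shortcuts; the same decoder trick can be attached at arbitrary stages.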
@misc{https://doi.org/10.48550/arxiv.2303.05511,
    url       = {https://arxiv.org/abs/2303.05511},
    author    = {Kang, Minguk and Zhu, Jun-Yan and Zhang, Richard and Park, Jaesik and Shechtman, Eli and Paris, Sylvain and Park, Taesung},
    title     = {Scaling up GANs for Text-to-Image Synthesis},
    publisher = {arXiv},
    year      = {2023},
    copyright = {arXiv.org perpetual, non-exclusive license}
}
@article{Liu2021TowardsFA,
    title   = {Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis},
    author  = {Bingchen Liu and Yizhe Zhu and Kunpeng Song and A. Elgammal},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2101.04775}
}