This forked repository has updates for modelling audio textures. Please see the original official NVIDIA README for details on licenses and citations.
Note: This version of StyleGAN2 is not compatible with PyTorch>1.8. We use PyTorch 1.7 for all experiments.
- Clone this repo.
- Install dependencies by creating a new conda environment called `audio-stylegan2`. (Note: if you are using this repo in conjunction with audio-latent-composition, you do not need to recreate it; both projects use the same environment setup.)

  ```
  conda env create -f environment.yml
  ```

- Add the newly created environment to Jupyter Notebooks:

  ```
  python -m ipykernel install --user --name audio-stylegan2
  ```
Kickstart training using the command below; see `config.json` for the various parameter settings. Note that this is unsupervised training. (TODO: integrate conditional training from the original repo.)

```
python main.py --data_dir=<data location> --out_dir=training-runs/<checkpoint location>
```
The notebooks outline how to generate random samples from the trained GAN. We use the Phase Gradient Heap Integration (PGHI) method to invert spectrograms back to audio; see this paper and this paper for details. StyleGAN architectures for audio learn spectrogram representations as images, so generated outputs need to be rescaled from [0, 255] to [-50, 0] before inversion.
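The image-to-spectrogram rescaling is a linear map between the two ranges. A minimal sketch (the function name is ours, and treating the map as linear is an assumption, not code from this repo):

```python
import numpy as np

def rescale_to_log_magnitude(img, lo=-50.0, hi=0.0):
    """Linearly map a generated image in [0, 255] to spectrogram values in [lo, hi].

    Assumes the generator's output has already been quantized/clipped to [0, 255].
    """
    img = np.asarray(img, dtype=np.float32)
    return img / 255.0 * (hi - lo) + lo
```

A pixel value of 0 maps to -50 and 255 maps to 0; the rescaled array can then be passed to the PGHI inversion step.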
We use the SeFa (closed-form factorization) algorithm from this paper to automatically find latent directions for controllability.
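At its core, SeFa factorizes the weight matrix of the layer(s) that consume the latent code: the top eigenvectors of W^T W are the directions that perturb the layer output the most, and these serve as candidate semantic directions. A minimal NumPy sketch of that idea (function name and shapes are illustrative, not this repo's API):

```python
import numpy as np

def sefa_directions(weight, k=5):
    """Closed-form factorization in the style of SeFa.

    weight: (out_dim, latent_dim) matrix mapping latent codes into a layer.
    Returns k unit-norm latent directions, shape (k, latent_dim), ordered by
    how strongly they change the layer's output.
    """
    # Eigendecomposition of W^T W; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(weight.T @ weight)
    order = np.argsort(eigvals)[::-1][:k]  # take the k largest eigenvalues
    return eigvecs[:, order].T
```

Each returned direction can be added (with a user-chosen scale) to a latent code before synthesis to edit the generated spectrogram along one factor of variation.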
TODO: Add the notebook and Streamlit interface.