
audioldm's Introduction

🔉 Audio Generation with AudioLDM


Generate speech, sound effects, music and beyond.

This repo currently supports:

  • Text-to-Audio Generation: Generate audio given text input.
  • Audio-to-Audio Generation: Given an audio clip, generate another clip that contains the same type of sound.
  • Text-guided Audio-to-Audio Style Transfer: Transfer the sound of an audio clip into another style using a text description.

Important tricks to make your generated audio sound better

  1. Try to provide more hints to AudioLDM, such as using more adjectives to describe your sound (e.g., clear, high quality) or making your target more specific (e.g., "water stream in a forest" instead of "stream"). This helps AudioLDM understand what you want (see the example commands below).
  2. Try different random seeds; they can sometimes affect the generation quality significantly.
  3. It's best to use general terms like 'man' or 'woman' instead of specific names of individuals or abstract objects that humans may not be familiar with.
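
For instance, with the commandline tool described below, the first two tips could translate into something like the following (the prompt wording and seed value are only illustrative):

# A vague prompt with the default seed
audioldm -t "stream"
# A more descriptive prompt with an explicit random seed
audioldm -t "A clear, high quality recording of a water stream in a forest" --seed 1234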

Change Log

2023-04-10: Finetune AudioLDM with the MusicCaps and AudioCaps datasets. Add three more checkpoints: audioldm-m-text-ft, audioldm-s-text-ft, and audioldm-m-full.

2023-03-04: Add two more checkpoints: a small model with more training steps and a large model. Add model selection in the Gradio app.

2023-02-24: Add audio-to-audio generation. Add test cases. Add a pipeline (python function) for audio super-resolution and inpainting.

2023-02-15: Add audio style transfer. Add more options on generation.

Web APP

The web app currently only supports Text-to-Audio generation. For full functionality, please refer to the Commandline Usage.

  1. Prepare running environment
conda create -n audioldm python=3.8; conda activate audioldm
pip3 install audioldm
git clone https://github.com/haoheliu/AudioLDM; cd AudioLDM
  2. Start the web application (powered by Gradio)
python3 app.py
  3. A link will be printed out. Click the link to open the browser and play.

Commandline Usage

Prepare running environment

# Optional
conda create -n audioldm python=3.8; conda activate audioldm
# Install AudioLDM
pip3 install audioldm

🌟 Text-to-Audio Generation: generate an audio guided by a text

# The default --mode is "generation"
audioldm -t "A hammer is hitting a wooden surface" 
# Result will be saved in "./output/generation"

🌟 Audio-to-Audio Generation: generate audio guided by an audio file (the output will contain similar audio events to the input).

audioldm --file_path trumpet.wav
# Result will be saved in "./output/generation_audio_to_audio/trumpet"
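
Audio-to-audio generation should accept the same general options as text-to-audio generation; for example, it can be combined with a different checkpoint or batch size (the values here are only illustrative):

# Use a larger checkpoint and generate two samples at once
audioldm --file_path trumpet.wav --model_name audioldm-l-full -b 2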

🌟 Text-guided Audio-to-Audio Style Transfer

# Test run
# --file_path is the original audio file for transfer
# -t is the text AudioLDM uses for transfer. 
# Please make sure that --file_path exists
audioldm --mode "transfer" --file_path trumpet.wav -t "Children Singing" 
# Result will be saved in "./output/transfer/trumpet"

# Tuning the value of --transfer_strength is important!
# --transfer_strength: A value between 0 and 1. 0 means original audio without transfer, 1 means completely transfer to the audio indicated by text
audioldm --mode "transfer" --file_path trumpet.wav -t "Children Singing" --transfer_strength 0.25

⚙️ How to choose between different model checkpoints?

# Add the --model_name parameter, choice={audioldm-m-text-ft, audioldm-s-text-ft, audioldm-m-full, audioldm-s-full, audioldm-l-full, audioldm-s-full-v2}
audioldm --model_name audioldm-s-full
  • ⭐ audioldm-m-full (default, recommended): the medium AudioLDM without finetuning, trained with audio embeddings as the condition (added 2023-04-10).
  • ⭐ audioldm-s-full (recommended): the original open-sourced version (added 2023-02-01).
  • ⭐ audioldm-s-full-v2 (recommended): more training steps compared with audioldm-s-full (added 2023-03-04).
  • audioldm-s-text-ft: the small AudioLDM finetuned with AudioCaps and MusicCaps audio-text pairs (added 2023-04-10).
  • audioldm-m-text-ft: the medium AudioLDM finetuned with AudioCaps and MusicCaps audio-text pairs (added 2023-04-10).
  • audioldm-l-full: a larger model compared with audioldm-s-full (added 2023-03-04).

@haoheliu personally did an informal evaluation of the overall quality of the checkpoints, which gives audioldm-m-full (6.85/10), audioldm-s-full (6.62/10), audioldm-s-text-ft (6/10), and audioldm-m-text-ft (5.46/10). These scores are only for reference and may not reflect the true performance of each checkpoint. Checkpoint performance also varies with different text inputs.

❔ For more options on guidance scale, batch size, seed, DDIM steps, etc., please run

audioldm -h
usage: audioldm [-h] [--mode {generation,transfer}] [-t TEXT] [-f FILE_PATH] [--transfer_strength TRANSFER_STRENGTH] [-s SAVE_PATH] [--model_name {audioldm-s-full,audioldm-l-full,audioldm-s-full-v2}] [-ckpt CKPT_PATH]
                [-b BATCHSIZE] [--ddim_steps DDIM_STEPS] [-gs GUIDANCE_SCALE] [-dur DURATION] [-n N_CANDIDATE_GEN_PER_TEXT] [--seed SEED]

optional arguments:
  -h, --help            show this help message and exit
  --mode {generation,transfer}
                        generation: text-to-audio generation; transfer: style transfer
  -t TEXT, --text TEXT  Text prompt to the model for audio generation, DEFAULT ""
  -f FILE_PATH, --file_path FILE_PATH
                        (--mode transfer): Original audio file for style transfer; Or (--mode generation): the guidance audio file for generating similar audio, DEFAULT None
  --transfer_strength TRANSFER_STRENGTH
                        A value between 0 and 1. 0 means original audio without transfer, 1 means completely transfer to the audio indicated by text, DEFAULT 0.5
  -s SAVE_PATH, --save_path SAVE_PATH
                        The path to save model output, DEFAULT "./output"
  --model_name {audioldm-s-full,audioldm-l-full,audioldm-s-full-v2}
                        The checkpoint you gonna use, DEFAULT "audioldm-s-full"
  -ckpt CKPT_PATH, --ckpt_path CKPT_PATH
                        (deprecated) The path to the pretrained .ckpt model, DEFAULT None
  -b BATCHSIZE, --batchsize BATCHSIZE
                        Generate how many samples at the same time, DEFAULT 1
  --ddim_steps DDIM_STEPS
                        The sampling step for DDIM, DEFAULT 200
  -gs GUIDANCE_SCALE, --guidance_scale GUIDANCE_SCALE
                        Guidance scale (Large => better quality and relevancy to text; Small => better diversity), DEFAULT 2.5
  -dur DURATION, --duration DURATION
                        The duration of the samples, DEFAULT 10
  -n N_CANDIDATE_GEN_PER_TEXT, --n_candidate_gen_per_text N_CANDIDATE_GEN_PER_TEXT
                        Automatic quality control. This number controls the number of candidates (e.g., generate three audios and choose the best to show you). A larger value usually leads to better quality with heavier computation, DEFAULT 3
  --seed SEED           Change this value (any integer number) will lead to a different generation result. DEFAULT 42
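
For example, a single command that combines several of these options might look like the following (the values are only illustrative):

audioldm -t "A hammer is hitting a wooden surface" --ddim_steps 100 -gs 3 -dur 5 -n 1 -b 1 --seed 12345 -s "./output"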

For the evaluation of audio generative model, please refer to audioldm_eval.

Hugging Face 🧨 Diffusers

AudioLDM is available in the Hugging Face 🧨 Diffusers library from v0.15.0 onwards. The official checkpoints can be found on the Hugging Face Hub, alongside documentation and example scripts.

To install Diffusers and Transformers, run:

pip install --upgrade diffusers transformers

You can then load pre-trained weights into the AudioLDM pipeline and generate text-conditional audio outputs:

from diffusers import AudioLDMPipeline
import torch

repo_id = "cvssp/audioldm-s-full-v2"
pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]
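
The pipeline returns the waveform as a NumPy array sampled at 16 kHz; a minimal sketch for writing it to disk (the output file name is arbitrary):

from scipy.io import wavfile

# AudioLDM generates 16 kHz audio; save the waveform as a WAV file
wavfile.write("techno.wav", rate=16000, data=audio)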

Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo on Hugging Face Spaces.

TuneFlow Demo

Try out AudioLDM as a TuneFlow plugin TuneFlow x AudioLDM. See how it can work in a real DAW (Digital Audio Workstation).

TODO

"Buy Me A Coffee"

  • Update the checkpoint with more training steps.
  • Update the checkpoint with more parameters (audioldm-l).
  • Add AudioCaps finetuned AudioLDM-S model
  • Build pip installable package for commandline use
  • Build Gradio web application
  • Add super-resolution, inpainting into Gradio web application
  • Add style-transfer into Gradio web application
  • Add text-guided style transfer
  • Add audio-to-audio generation
  • Add audio super-resolution
  • Add audio inpainting

Cite this work

If you found this tool useful, please consider citing

@article{liu2023audioldm,
  title={{AudioLDM}: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={Proceedings of the International Conference on Machine Learning},
  year={2023},
  pages={21450-21474}
}

Hardware requirement

  • GPU with 8GB of dedicated VRAM
  • A system with a 64-bit operating system (Windows 7, 8.1, or 10; Ubuntu 16.04 or later; or macOS 10.13 or later) and 16GB or more of system RAM

Reference

Part of the code is borrowed from the following repos. We would like to thank the authors of these repos for their contributions.

https://github.com/LAION-AI/CLAP

https://github.com/CompVis/stable-diffusion

https://github.com/v-iashin/SpecVQGAN

https://github.com/toshas/torch-fidelity

We built the model with data from AudioSet, Freesound, and the BBC Sound Effects library. We share this demo based on the UK copyright exception for data used in academic research.

audioldm's People

Contributors

andantei, bradgrimm, eltociear, fcolecumberri, haoheliu, harushii18, hysts, jagilley, littleflyingsheep, mederka, olaviinha, sanchit-gandhi


audioldm's Issues

Duration of LDM pre-training

Hi,

Great work! I noticed that AudioLDM-L was pre-trained for 0.6M steps on the Audiocaps dataset. You used a batch size of 5/8 and a single RTX 3090. Thus I am wondering how long it took to train the model.

Thanks

Torch not compiled with CUDA enabled

I am on a 2019 mac pro (AMD) and am getting the error:
Torch not compiled with CUDA enabled

Is there a way to make AudioLDM work on my machine?

Related question:
I am getting the same error on my M1 MacBook Air. Can this be used on the MacBook?
I think PyTorch just added Metal support, but I am not sure if this helps here?

Thank you!

Training the autoencoder

Hi authors,

Wonderful work! Could you please share the scripts to train the autoencoder (VAE)? Thanks a lot.

Fine-tune model

Would it be possible to fine-tune the model with a folder of my own sounds, e.g. kick samples?

Stuck at "downloading the main structure of audioldm"

Following the instructions for running app.py through terminal, after I run "python3 app.py" I get:
Downloading the main structure of audioldm

And then it gets stuck. I tried running "rm ~/.cache/audioldm/audioldm-s-full.ckpt" and using pip and python instead of pip3 and python3 just in case but it didn't help.

Let's build a community around it

Hey, I just created a Discord server where we can connect together to discuss our research, sounds and ideas using AudioLDM.

https://discord.gg/seHAVq5U

If the admins approve, maybe let's add it to the project's description.
I can lead the server and assign moderators if needed.

OSError: [WinError 127] The specified procedure could not be found

when running the audio to audio script I get this error
full log:

Traceback (most recent call last):
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 146, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "E:\AudioLDM\AudioLDM\audioldm\__init__.py", line 3, in <module>
    from .pipeline import *
  File "E:\AudioLDM\AudioLDM\audioldm\pipeline.py", line 11, in <module>
    from audioldm.audio import wav_to_fbank, TacotronSTFT, read_wav_file
  File "E:\AudioLDM\AudioLDM\audioldm\audio\__init__.py", line 1, in <module>
    from .tools import wav_to_fbank, read_wav_file
  File "E:\AudioLDM\AudioLDM\audioldm\audio\tools.py", line 3, in <module>
    import torchaudio
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\__init__.py", line 1, in <module>
    from torchaudio import _extension  # noqa: F401
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 67, in <module>
    _init_extension()
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 61, in _init_extension
    _load_lib("libtorchaudio")
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torchaudio\_extension.py", line 51, in _load_lib
    torch.ops.load_library(path)
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\site-packages\torch\_ops.py", line 573, in load_library
    ctypes.CDLL(path)
  File "C:\Users\FoxFl\anaconda3\envs\audioldm\lib\ctypes\__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 127] The specified procedure could not be found

(audioldm) E:\AudioLDM\AudioLDM>audioldm --file_path E:\AudioLDM\AudioLDM\inputs\escapism.mp3
(running the command prints the same traceback again)

pip package doesn't work

It seems like the package is missing files, or at least I think so. The command prompt does not work as it is.

command:
python -m audioldm

error I got:
No module named audioldm.main; 'audioldm' is a package and cannot be directly executed

Everything is installed. When something is missing, it does show errors; however, when it should run, it doesn't.

Generating more than 10 seconds, with inpainting

Hello,
I have been experimenting with >10 second generation via infilling: 50% past audio (5 seconds) and 50% blank audio (5 seconds). What I saw so far was:

  1. the infilled audio has significantly higher amplitude (that could be fixed, not a big issue)
  2. the infilled "music" is not coherent; when used for music generation, the output is very faded at the beginning and end of the masked region, and only in the middle does it resemble a normal gain (normal compared to itself - there's always a big amplitude difference w.r.t. the original audio)

Is there a way to improve this task, extending music generation by infilling?

instructions did not work

I managed to install the web app from Hugging Face, but I want the terminal version for more options. I followed the instructions, but in the end it said that audioldm is a package, not a module. Any help?

Can't run app.py

Hey all, trying to run app.py with the readme directions and getting the following error on Ubuntu 23.04:
DiffusionWrapper has 185.04 M params.
/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torchlibrosa/stft.py:193: FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
fft_window = librosa.util.pad_center(fft_window, n_fft)
/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.decoder.weight']

  • This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
  File "app.py", line 8, in <module>
    audioldm = build_model()
  File "/home/jerrick/AudioLDM/audioldm/pipeline.py", line 56, in build_model
    checkpoint = torch.load(resume_from_checkpoint, map_location=device)
  File "/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torch/serialization.py", line 777, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/home/jerrick/anaconda3/envs/audioldm/lib/python3.8/site-packages/torch/serialization.py", line 282, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Hugging Face needs fixing

(screen recording of the Hugging Face Space queue)

So as you see here, every time I try to generate a song on the Hugging Face space, the queue has me around 296th in line, and it's taking about 3880 tokens to generate. Are you guys going to work on a solution?

doesn't unload VRAM after operation [5.5 gb occupied after 1st run]

I can just restart everything by interrupting and then re-running app.py, which is okay, but it kind of sucks. It occupies 5.4 GB, and double that exceeds 8 GB.
Interrupting the cycle of course resets it. Maybe add an automatic purge?

Running locally - 8 GB RAM | 7.8 max available - tested large and small models

train my own model

Hello mates, first of all thanks a lot for this awesome tool. I was playing with it and the results are awesome.
Is there any way for me to train my own model based on my own dataset? I read that this was inspired by Stable Diffusion, so I think it's possible to train on my own dataset and make different models.

AudioLDM train code

Dear authors,

I'm writing to inquire about the availability of the AudioLDM training code in the GitHub repository.
I couldn't find the code there, and I'm wondering if there are any plans to provide it.

Thank you very much for releasing your code.

No module named 'audioldm'

I did everything according to the instructions, but I still ran into this problem.

my system is windows 11

then ran the command:
python3 scripts/text2sound.py -t "A hammer is hitting a wooden surface"

the error i got:
Traceback (most recent call last):
  File "path-to-directory\scripts\text2sound.py", line 2, in <module>
    from audioldm import text_to_audio, build_model, save_wave
ModuleNotFoundError: No module named 'audioldm'

errors after trying to run app.py

Traceback (most recent call last):
  File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\runpy.py", line 185, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "X:\AudioLDM-Neu\app.py", line 9, in <module>
    audioldm = build_model()
  File "X:\AudioLDM-Neu\audioldm\pipeline.py", line 64, in build_model
    checkpoint = torch.load(resume_from_checkpoint, map_location=device)
  File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\site-packages\torch\serialization.py", line 777, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "X:\AudioLDM-Neu\AudioLDM-Neu-venv\lib\site-packages\torch\serialization.py", line 282, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

I'd appreciate any help

I have installed pytorch for cuda, and have a checkpoint in the ckpt folder, removing the checkpoint didn't change anything

duration greater than or equal to 20 results in weird noise

Hi! Thanks for this, this is amazing to play around with :)

I'm trying to use it for a film to transform music via an algorithm to create a transition from acoustic to artificial.
When I'm using --duration 20, or more, it results in purely noise.

I'm running it locally on an M1 Macbook Pro.

I've also manually changed to use the CPU as it otherwise won't run on this machine, I think.

Anyone got an idea what to do?
Thank you!

"CUDA out of memory" when generating longer duration audio. Any way to fix?

I'm an enthusiast trying to see what is possible with this model and I can generate 30 second pieces on my RTX 3060 just fine but when I try 60 seconds I get this:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.75 GiB (GPU 0; 12.00 GiB total capacity; 10.34 GiB already allocated; 0 bytes free; 10.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is it possible to generate longer audio without buying a new GPU?
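
The error message itself points at one mitigation worth trying before buying new hardware; a sketch (untested here, the value is only illustrative and may still not be enough for 60 seconds):

# Reduce allocator fragmentation, as the error message suggests
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# A single candidate (-n 1) and batch size 1 may also lower peak memory
audioldm -t "your prompt" -dur 60 -n 1 -b 1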

Looping an existing sound (possibly out of scope)

Hi.

Played a bit with the model (both -s models fit into 8GB VRAM, very nice) and tried my luck with SD-derived prompting skills.
Very impressive. Thank you very much for giving it to us.

I've got a weird request, I guess, or asking for guidance how to accomplish it.
I noticed that transfer with transfer strength 0 is okay-ish close to the original, despite it being decomposed to the latent space and then assembled back.

Is it possible to make or where should I start digging on my own to make a mode that will attempt to seamlessly loop an existing SFX snippet?

Making a seamless loop is usually manual work.
I'm aware only of nvk_LOOPMAKER plugin for Reaper that semi-automates the process and I kinda want to make a batch of them and go drink some tea.

Basically I just want to select a bunch of sounds from a soundbank (all licensed for commercial use, obviously), run a batch file, go drink some tea and return to processed loops that are close to the original and are looped seamlessly.

Maybe, hopefully, there's a magic prompt that will do it for me already? Fat chance, but you never know.

Thank you.

Higher bit-depth model

I know upscaling is going to be released on Friday (which is OMG), but is it possible to retrain the model at 44.1 kHz? Is it feasible? Would a typical GPU run that? I think having a richer depth even in generation might give much clearer and crisper results.

Torch not compiled with CUDA (Windows support?)

I followed the instructions but kept getting complaints that "Torch not compiled with CUDA enabled".

I tried switching Torch to the CU variants, but that only resulted in cascading complaints regarding missing DLLs.

Has this repository been tested in a windows environment? (Or am I on a fool's errand?)


    return func(*args, **kwargs)
  File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 127, in sample
    self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
  File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 43, in make_schedule
    self.register_buffer("betas", to_torch(self.model.betas))
  File "G:\Audio\audioldm-text-to-audio-generation\audioldm\latent_diffusion\ddim.py", line 25, in register_buffer
    attr = attr.to(torch.device("cuda"))
  File "C:\Users\RandomName\anaconda3\envs\audioldm\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

cuda() requirement on apple silicon (m1, m2) macs

On Apple silicon Macs cuda() isn't working. I tried replacing the torch.device("cuda") in ddim.py, but there are more errors depending on what you're doing.

Is there a way you could provide a pip3 package that accounts for M1 Macs and just uses the CPU? I tried the audio transfer with torch.device("cpu") and it does work.

Thank you!
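
For reference, the kind of device fallback described above might look roughly like this; this is only a sketch of the idea, not the project's official fix:

import torch

# Fall back to CPU when CUDA is not available (the poster reports CPU works)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# e.g. the hard-coded call in audioldm/latent_diffusion/ddim.py could then become:
# attr = attr.to(device)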

multichannel / stereo

I think I've heard some examples in stereo?
Is this possible using the CLI version?

CUDA out of memory with default setup [Windows]

I'm running into this error when trying to run audioldm -t "A hammer is hitting a wooden surface".

My setup is Windows, python 3.9, creating a virtual environment and installing with pip install audioldm.
I had to uninstall torch and re-install using pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
after getting the error AssertionError: Torch not compiled with CUDA enabled.

I tried clearing the cache of cuda using a script (after googling this issue) with no luck.
I don't really know where to go from here? (New world to me)

Here's the full error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 6.00 GiB total capacity; 5.35 GiB already allocated; 0 bytes free; 5.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Any help appreciated!
And also amazing work, AudioLDM is insanely cool.

Google Collab NB is not working

/root/.cache/audiol 100%[===================>] 2.38G 69.4MB/s in 4m 15s

2023-03-18 15:21:18 (9.58 MB/s) - ‘/root/.cache/audioldm/audioldm-full-s-v2.ckpt’ saved [2559017383/2559017383]


SameFileError Traceback (most recent call last)
in
216 op(c.warn, 'Downloading', use_ckpt)
217 get_ipython().system('wget {ckpt_url} -O {models_dir}{use_ckpt}')
--> 218 shutil.copy(models_dir+use_ckpt, use_ckpt_path+use_ckpt)
219 op(c.ok, 'Done.')
220

1 frames
/usr/lib/python3.9/shutil.py in copyfile(src, dst, follow_symlinks)
242
243 if _samefile(src, dst):
--> 244 raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
245
246 file_size = 0

SameFileError: '/root/.cache/audioldm/audioldm-full-s-v2.ckpt' and '/root/.cache/audioldm/audioldm-full-s-v2.ckpt' are the same file


NameError Traceback (most recent call last)
in
93 if action == 'generate':
 94     file_out = dir_out+uniq_id+'_'+slug(input)[:60]+'_'+str(i).zfill(3)+'.wav'
---> 95 generated_audio = text2audio(input, duration, None, guidance_scale, seed, candidates, ddim_steps)
96 elif action == 'audio2audio':
 97     file_out = dir_out+uniq_id+'_'+basename(init_path)+'_'+str(i).zfill(3)+'.wav'

NameError: name 'text2audio' is not defined

Windows install issue

I ran the install instructions on Windows 10 - no complaints or errors during the installation - but got the following error message when I try to run the text prompt example. Any pointers on what I should try/fix?

(screenshot of the error message)

Google Colab

The Hugging Face web demo at this moment is failing to output the result as a video file. Are there any plans to add a Google Colab notebook to the repository?

STFT Parameters

I tried to reproduce the audio spectrogram processing settings in the paper.
Is this the correct one?
(screenshot of the attempted STFT parameter settings)

AudioLDM-L/-Full weights?

Hello, I was reading the paper and noticed a "superior" version of the model, AudioLDM-L.
Are the weights of this version going to be released?

Also, I registered the "audioldm" org on HF, so just let me know if you want it and I can pass it to you.

CUFFT_INTERNAL_ERROR

When attempting to run app.py, an error is returned.
return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR

Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory

$ python3 app.py
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Load AudioLDM: %s audioldm-s-full
DiffusionWrapper has 185.04 M params.
/home/teamy/miniconda3/envs/audioldm/lib/python3.8/site-packages/torchlibrosa/stft.py:193: FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
  fft_window = librosa.util.pad_center(fft_window, n_fft)
/home/teamy/miniconda3/envs/audioldm/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Generate audio using text A hammer is hitting a wooden surface
Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory
Aborted

Running under wsl Ubuntu 22.04.2 LTS D:

TypeError: Invalid file: 45.0

(I think I posted this somewhere else too, sorry if it's not related to that place.)
Hello, when running app.py, I get a large error and am not sure what to do.
The error:

FutureWarning: Pass size=1024 as keyword args. From version 0.10 passing these as positional arguments will result in an error
fft_window = librosa.util.pad_center(fft_window, n_fft)
C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:3191.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias']

  • This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
    You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
    Caching examples at: 'C:\Users\anedi\Desktop\audioldm\AudioLDM\gradio_cached_examples\15\log.csv'
    Traceback (most recent call last):
    File "c:/Users/anedi/Desktop/audioldm/AudioLDM/app.py", line 286, in
    gr.Examples(
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\helpers.py", line 69, in create_examples
    utils.synchronize_async(examples_obj.create)
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\utils.py", line 377, in synchronize_async
    return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs)
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\fsspec\asyn.py", line 99, in sync
    raise return_result
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\fsspec\asyn.py", line 54, in runner
    result[0] = await coro
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\helpers.py", line 273, in create
    await self.cache()
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\helpers.py", line 308, in cache
    prediction = await Context.root_block.process_api(
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\anyio_backends_asyncio.py", line 937,
    in run_sync_in_worker_thread
    return await future
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\anyio_backends_asyncio.py", line 867,
    in run
    result = context.run(func, *args)
    File "c:/Users/anedi/Desktop/audioldm/AudioLDM/app.py", line 30, in text2audio
    waveform = text_to_audio(
    File "c:\Users\anedi\Desktop\audioldm\AudioLDM\audioldm\pipeline.py", line 115, in text_to_audio
    waveform = read_wav_file(original_audio_file_path, int(duration * 102.4) * 160)
    File "c:\Users\anedi\Desktop\audioldm\AudioLDM\audioldm\audio\tools.py", line 55, in read_wav_file
    waveform, sr = torchaudio.load(filename) # Faster!!!
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\torchaudio\backend\soundfile_backend.py", line 205, in load
    with soundfile.SoundFile(filepath, "r") as file
    :
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\soundfile.py", line 655, in init
    self._file = self._open(file, mode_int, closefd)
    File "C:\Users\anedi\AppData\Local\Programs\Python\Python38\lib\site-packages\soundfile.py", line 1209, in _open
    raise TypeError("Invalid file: {0!r}".format(self.name))
    TypeError: Invalid file: 45.0
