
jukemir's Issues

RuntimeError: CuDNN Error: CUDNN_STATUS_MAPPING_ERROR

When I ran the Docker image, I first got the "found no NVIDIA driver" error described in another issue. After installing nvidia-container, that problem seemed to be solved.

Then I tried the following command again. Since I have two cards on the machine, only card 0 is assigned:
sudo docker run -it --rm --gpus='"device=0"' -v xxx:/input -v xxx:/output --entrypoint bash jukemir/representations_jukebox
and then, inside the container:
python main.py --batch_size 8

After a few minutes (of initialization, I guess), I got the following error:
Traceback (most recent call last):
  File "main.py", line 177, in <module>
    representation = get_acts_from_file(input_path, hps, vqvae, top_prior, meanpool=True)
  File "main.py", line 86, in get_acts_from_file
    z = get_z(audio, vqvae)
  File "main.py", line 27, in get_z
    zs = vqvae.encode(torch.cuda.FloatTensor(audio[np.newaxis, :, np.newaxis]))
  File "/code/jukebox/jukebox/vqvae/vqvae.py", line 141, in encode
    zs_i = self._encode(x_i, start_level=start_level, end_level=end_level)
  File "/code/jukebox/jukebox/vqvae/vqvae.py", line 132, in _encode
    x_out = encoder(x_in)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/code/jukebox/jukebox/vqvae/encdec.py", line 80, in forward
    x = level_block(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/code/jukebox/jukebox/vqvae/encdec.py", line 26, in forward
    return self.model(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 202, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR

I googled it and added torch.backends.cudnn.enabled = False to main.py, but a new problem occurred:
Traceback (most recent call last):
  File "main.py", line 179, in <module>
    representation = get_acts_from_file(input_path, hps, vqvae, top_prior, meanpool=True)
  File "main.py", line 88, in get_acts_from_file
    z = get_z(audio, vqvae)
  File "main.py", line 29, in get_z
    zs = vqvae.encode(torch.cuda.FloatTensor(audio[np.newaxis, :, np.newaxis]))
  File "/code/jukebox/jukebox/vqvae/vqvae.py", line 141, in encode
    zs_i = self._encode(x_i, start_level=start_level, end_level=end_level)
  File "/code/jukebox/jukebox/vqvae/vqvae.py", line 132, in _encode
    x_out = encoder(x_in)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/code/jukebox/jukebox/vqvae/encdec.py", line 80, in forward
    x = level_block(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/code/jukebox/jukebox/vqvae/encdec.py", line 26, in forward
    return self.model(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 202, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)

Did I miss anything?
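
A minimal sanity check, independent of Jukebox, can narrow this down: run a single convolution on the GPU in both fp32 and fp16. This snippet is a hypothetical repro, not part of the original report; if it fails with the same cuDNN/cuBLAS errors, the container's driver/CUDA/cuDNN stack is at fault rather than main.py.

import torch

# Tiny repro: one Conv1d forward pass on the GPU, in fp32 and fp16.
x = torch.randn(1, 64, 1024, device="cuda")
conv = torch.nn.Conv1d(64, 64, kernel_size=3, padding=1).cuda()
print("fp32 ok:", conv(x).shape)
print("fp16 ok:", conv.half()(x.half()).shape)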

Errors when running the Colab notebook

Hello,
I am experiencing a series of errors when trying to run the Colab notebook provided with the code.

First, installing jukebox throws the following error:

× python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

However, this can be solved by installing a different version of jukebox:

!pip install --upgrade git+https://github.com/craftmine1000/jukebox-saveopt.git

Then, in the initialization block, the following warnings arise:

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:2025: UserWarning: for encoders.0.level_blocks.0.model.0.0.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:2025: UserWarning: for encoders.0.level_blocks.0.model.0.0.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:2025: UserWarning: for encoders.0.level_blocks.0.model.0.1.model.0.model.1.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
...
These warnings are raised at the line: top_prior = make_prior(hparams, vqvae, device)#device)

Please advise on how to resolve this. Thanks in advance.
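
The warning text itself points at a possible fix: when a model is built with meta-device parameters, in-place copies from a checkpoint are no-ops, and load_state_dict(..., assign=True) (available since PyTorch 2.1) assigns the checkpoint tensors instead. Whether the jukebox-saveopt fork exposes a convenient place to pass this flag is an assumption; the sketch below only shows the general pattern.

import torch

def restore_by_assignment(model: torch.nn.Module, path: str) -> None:
    # Assign checkpoint tensors so meta parameters are actually replaced
    # instead of being no-op copied (requires PyTorch >= 2.1).
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model"], assign=True)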

ValueError: Audio file is not long enough

restore jukebox/models/5b/prior_level_2.pth.tar
Restored from jukebox/models/5b/prior_level_2.pth.tar
0%| | 0/60 [00:35<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 177, in <module>
    representation = get_acts_from_file(input_path, hps, vqvae, top_prior, meanpool=True)
  File "main.py", line 86, in get_acts_from_file
    z = get_z(audio, vqvae)
  File "main.py", line 32, in get_z
    raise ValueError('Audio file is not long enough')
ValueError: Audio file is not long enough
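
If short clips are expected in the dataset, one possible workaround is to right-pad the waveform to the encoder's expected length before calling get_z. This is a hedged sketch, not part of the shipped main.py: it assumes a required length of 1048576 samples at 44.1 kHz (about 23.8 seconds, matching the sample length printed by the script), and whether zero-padding is acceptable for a given downstream task is also an assumption.

import numpy as np

JUKEBOX_SAMPLE_RATE = 44100  # assumed sample rate used by main.py
TARGET_LEN = 1048576         # assumed required length (~23.8 s)

def pad_audio(audio: np.ndarray) -> np.ndarray:
    # Right-pad (or crop) a mono waveform to TARGET_LEN samples.
    if audio.shape[0] >= TARGET_LEN:
        return audio[:TARGET_LEN]
    return np.pad(audio, (0, TARGET_LEN - audio.shape[0]))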

Model selection for extracting jukebox representations

Hello,

I would like to ask a question regarding the language model selection in the main script for extracting jukebox representations. I am referring to the main Python script under jukemir/representations/jukebox.

There are two options for the language model, '5b' or '1b_lyrics'. However, when setting parameters, there are a couple of conditional expressions referring to a model '5b_lyrics'. Please see the excerpt of the code below, from line 153 onwards:

# Set up VQVAE
model = "5b"  # or "1b_lyrics"
hps = Hyperparams()
hps.sr = 44100
hps.n_samples = 3 if model == "5b_lyrics" else 8
hps.name = "samples"
chunk_size = 16 if model == "5b_lyrics" else 32
max_batch_size = 3 if model == "5b_lyrics" else 16

Is there a choice between three different models, or is "5b" identical to "5b_lyrics"? Which values did you use for n_samples, chunk_size, and max_batch_size when using the pretrained 5B-parameter language model to extract representations for the datasets in the paper?

Thank you!
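
For what it's worth, the excerpt can be evaluated directly: with model set to either of the two documented options, the "5b_lyrics" comparisons are always false, so the else-branch values are what the script actually uses (whether that was intentional is the open question above).

for model in ("5b", "1b_lyrics"):
    n_samples = 3 if model == "5b_lyrics" else 8        # always 8
    chunk_size = 16 if model == "5b_lyrics" else 32     # always 32
    max_batch_size = 3 if model == "5b_lyrics" else 16  # always 16
    print(model, n_samples, chunk_size, max_batch_size)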

AssertionError: Found no NVIDIA driver on your system

When I run the shell command
docker run -it --rm -v xxx/video_trim_audio/:/input -v /xxx/jukemir/wav_jukebox/:/output 393fa1440720

I get the following error:

Traceback (most recent call last):
  File "main.py", line 151, in <module>
    rank, local_rank, device = setup_dist_from_mpi()
  File "/code/jukebox/jukebox/utils/dist_utils.py", line 46, in setup_dist_from_mpi
    return _setup_dist_from_mpi(master_addr, backend, port, n_attempts, verbose)
  File "/code/jukebox/jukebox/utils/dist_utils.py", line 93, in _setup_dist_from_mpi
    torch.cuda.set_device(local_rank)
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 292, in set_device
    torch._C._cuda_setDevice(device)
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 196, in _lazy_init
    _check_driver()
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 101, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
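
Note that the docker run command above passes no --gpus flag, so the container usually cannot see the host GPU even when a driver is installed. A hypothetical pre-flight check (not part of the original main.py) fails fast with a clearer message:

import torch

# Fail fast if the container cannot see a CUDA device, e.g. because
# `docker run` was invoked without `--gpus all` or `--gpus '"device=0"'`.
if not torch.cuda.is_available():
    raise SystemExit(
        "No CUDA device visible inside the container; "
        "re-run docker with a --gpus flag and the NVIDIA runtime installed."
    )
print("Visible GPUs:", torch.cuda.device_count())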

RuntimeError: Error(s) in loading state_dict for SimplePrior

Hello,

Thank you for making your work public.
I am having an issue when trying to extract jukebox representations using the "5b" model.
My Python script is identical to your main.py under /representations/jukebox, except that I am using a different dataset.

Please see the exact error below:

0: Loading vqvae in eval mode
Loading artist IDs from /data/home/acw512/musicnet_vgg_multitask/lib/python3.8/site-packages/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /data/home/acw512/musicnet_vgg_multitask/lib/python3.8/site-packages/jukebox/data/ids/v2_genre_ids.txt
Level:2, Cond downsample:None, Raw to tokens:128, Sample length:1048576
0: Converting to fp16 params
Downloading from azure
Running  wget -O /data/home/acw512/.cache/jukebox/models/5b/prior_level_2.pth.tar https://openaipublic.azureedge.net/jukebox/models/5b/prior_level_2.pth.tar
Restored from /data/home/acw512/.cache/jukebox/models/5b/prior_level_2.pth.tar
Traceback (most recent call last):
  File "test_representation.py", line 139, in <module>
    top_prior = make_prior(hparams, vqvae, device)
  File "/data/home/acw512/musicnet_vgg_multitask/lib/python3.8/site-packages/jukebox/make_models.py", line 179, in make_prior
    restore_model(hps, prior, hps.restore_prior)
  File "/data/home/acw512/musicnet_vgg_multitask/lib/python3.8/site-packages/jukebox/make_models.py", line 61, in restore_model
    model.load_state_dict(checkpoint['model'])
  File "/data/home/acw512/musicnet_vgg_multitask/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SimplePrior:
        Unexpected key(s) in state_dict: "prior.transformer._attn_mods.36.attn.c_attn.w", "prior.transformer._attn_mods.36.attn.c_attn.b", "prior.transformer._attn_mods.36.attn.c_proj.w", "prior.transformer._attn_mods.36.attn.c_proj.b", "prior.transformer._attn_mods.36.ln_0.weight", "prior.transformer._attn_mods.36.ln_0.bias", "prior.transformer._attn_mods.36.mlp.c_fc.w", "prior.transformer._attn_mods.36.mlp.c_fc.b", "prior.transformer._attn_mods.36.mlp.c_proj.w", "prior.transformer._attn_mods.36.mlp.c_proj.b", "prior.transformer._attn_mods.36.ln_1.weight", "prior.transformer._attn_mods.36.ln_1.bias", [... the same twelve attn/ln/mlp keys repeat for every layer index from 37 through 70 ...] "prior.transformer._attn_mods.71.attn.c_attn.w", "prior.transformer._attn_mods.71.attn.c_attn.b", "prior.transformer._attn_mods.71.attn.c_proj.w", "prior.transformer._attn_mods.71.attn.c_proj.b", "prior.transformer._attn_mods.71.ln_0.weight", "prior.transformer._attn_mods.71.ln_0.bias", "prior.transformer._attn_mods.71.mlp.c_fc.w", "prior.transformer._attn_mods.71.mlp.c_fc.b", "prior.transformer._attn_mods.71.mlp.c_proj.w", "prior.transformer._attn_mods.71.mlp.c_proj.b", "prior.transformer._attn_mods.71.ln_1.weight", "prior.transformer._attn_mods.71.ln_1.bias".

3_extract.sh not generating outputs

Hi, I really appreciate this nice work. I stumbled upon a problem when trying to reproduce the experimental results. After executing 3_extract.sh following the instructions in the README, there is nothing inside the representation output folder (e.g. ~/.jukemir/representations/gtzan_ff/jukebox), and the terminal printed only the following text, without indicating any errors.

lab812@lab812-Z390-UD:~/jukemir/reproduce$ bash 3_extract.sh 
  0%|                                                                                                                                   | 0/4 [00:00<?, ?it/s]Using cuda True
Downloading from azure
Restored from /root/.cache/jukebox/models/5b/vqvae.pth.tar
0: Loading vqvae in eval mode
lab812@lab812-Z390-UD:

Is it because the hardware does not meet your execution requirements (at least 30GB of RAM and a GPU with at least 12GB of memory)?
Thanks for your reply in advance!
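
A silent return to the shell right after "0: Loading vqvae in eval mode", with no traceback, is consistent with the kernel's OOM killer terminating the process rather than a Python error, so the 30GB RAM requirement is a plausible culprit; checking dmesg for "Killed process" entries after a run would confirm it. A hypothetical pre-flight check (Linux only, reads /proc/meminfo):

# Print available RAM before launching extraction (Linux only).
with open("/proc/meminfo") as f:
    mem_kb = next(int(line.split()[1]) for line in f
                  if line.startswith("MemAvailable"))
print(f"Available RAM: {mem_kb / 1048576:.1f} GiB (README asks for >= 30 GB)")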

Dockerfile incomplete?

Hi,

I am interested in extracting jukebox representations from my own dataset. I've looked at the README, where you provide a Docker environment and instructions on how to process data. However, I am running on a server cluster where Docker containers are not well supported.

I was thinking of replicating the environment by looking at the source Dockerfile, but it seems to differ from the command list in the Docker image. Ideally, I would like to create a conda environment or similar, and just install Python-related packages while avoiding low-level installations.

Do you know if this is possible? Are there any alternatives?

Thanks
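
One way to recover the image's actual build steps without a complete Dockerfile is to read the published image's layer history; each layer's CREATED BY entry shows the command that produced it. A hypothetical sketch (assumes the docker CLI is available somewhere, e.g. a local machine, and the image has been pulled):

import subprocess

# Dump the full, untruncated layer history of the published image.
subprocess.run(
    ["docker", "history", "--no-trunc", "jukemir/representations_jukebox"],
    check=True,
)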
