
speechclip's People

Contributors

atosystem · shampoowang · vectominist


speechclip's Issues

derive embeddings: cascaded models

I'm able to use example.py for inference with the base parallel Flickr model, but I get the following error when I use the cascaded models instead, i.e. model_fp = "slt_ckpts/SpeechCLIP/base/flickr/cascaded/epoch_58-step_6902-val_recall_mean_1_7.7700.ckpt" or model_fp = "slt_ckpts/SpeechCLIP/large/flickr/cascaded/epoch_187-step_21995-val_recall_mean_10_62.7700.ckpt" or model_fp = "slt_ckpts/SpeechCLIP/large/coco/cascaded/epoch_12-step_28794-val_recall_mean_10_36.1455.ckpt"

Traceback (most recent call last):
File "/work/07469/lpugalen/ls6/SpeechCLIP/example.py", line 61, in
speechFeatVector_baseFlickrCascasdedModel = baseFlickrCascasdedModel.encode_speech(wav=wav_data)  # ["cascaded_audio_feat"]
File "/work/07469/lpugalen/ls6/SpeechCLIP/avssl/model/kwClip.py", line 1340, in encode_speech
cascaded_audio_feat, vq_results, keywords = self.cascaded_branch(
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/work/07469/lpugalen/ls6/SpeechCLIP/avssl/model/kwClip.py", line 914, in forward
audio_feat = self.clip.encode_keywords(keywords, self.keyword_num)
File "/work/07469/lpugalen/ls6/SpeechCLIP/avssl/module/clip_official.py", line 249, in encode_keywords
x = self.model.token_embedding(text)
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/sparse.py", line 163, in forward
return F.embedding(
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/functional.py", line 2237, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
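The error suggests the input waveform tensor stays on CPU while the model's CLIP token embedding lives on cuda:0. One possible workaround (a generic PyTorch sketch with an illustrative helper name, not the project's official fix) is to move the waveform to the model's device before calling encode_speech:

```python
import torch

def encode_on_model_device(model, wav):
    """Move the waveform onto the same device as the model's parameters
    before encoding, to avoid cuda:0 vs cpu mismatches."""
    device = next(model.parameters()).device
    wav = torch.as_tensor(wav, dtype=torch.float32).to(device)
    return model.encode_speech(wav=wav)
```

If the error persists even with matching input devices, a buffer created inside the cascaded branch may itself be on the wrong device, which would need a fix in the repository code.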

Dataset source?

Outstanding job! I just can't seem to find the link to the dataset in the cited paper.

Simple Embeddings

Hi,

Could you please provide a simple way to load a model and produce an embedding for a single audio clip?

Thank you very much.
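Until a helper lands in the repository, something along these lines may work, based on the calls visible in example.py and the tracebacks in the other issues (the helper name, the 16 kHz mono assumption, and the device handling are illustrative, not official API):

```python
import torch

def embed_single_clip(model, wav, device="cpu"):
    """Return the embedding for one audio clip.

    `model` is e.g. an avssl.model.KWClip_GeneralTransformer loaded via
    load_from_checkpoint; `wav` is a 1-D float array, assumed to be
    16 kHz mono as in example.py."""
    model = model.to(device).eval()
    wav = torch.as_tensor(wav, dtype=torch.float32).to(device)
    with torch.no_grad():  # inference only, no gradients needed
        feat = model.encode_speech(wav=wav)
    return feat
```

The checkpoint itself would be loaded as in example.py, e.g. `avssl.model.KWClip_GeneralTransformer.load_from_checkpoint(model_fp)`.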

Training on Flickr Dataset Unexpectedly Hangs

Hello,

First of all, thank you very much for this work and your efforts! The repository and guidelines are succinct and pretty effective!

I've encountered a recurring issue while training the large parallel model on the Flickr dataset. The training process unexpectedly hangs: no updates appear in the terminal or the wandb logs. This occurred at approximately 2.7k steps during the first run and around 32k steps in the second. My Conda environment uses Python 3.10, and I was running the experiments on 4 A5000 GPUs.

Currently, whenever the training process halts, I resume from the latest checkpoint using the resume flag in the training script as a workaround.

I am curious if this is a known issue. Are there components in the code that might cause such behavior, particularly with my setup? Additionally, is resuming training a recommended approach, or are there other flags/settings I should consider?

Any insights or suggestions you can provide would be greatly appreciated.

Thank you!

Can derive embeddings with base but not large

In example.py, I get the following error when I substitute model_fp = "slt_ckpts/SpeechCLIP/base/flickr/parallel/epoch_131-step_15443-val_recall_mean_1_36.0100.ckpt"
with model_fp = "slt_ckpts/SpeechCLIP/large/flickr/parallel/epoch_56-step_6668-val_recall_mean_10_89.0000.ckpt"

I also get the same error for model_fp = "slt_ckpts/SpeechCLIP/large/coco/parallel/epoch_14-step_33224-val_recall_mean_10_84.0128.ckpt"

Traceback (most recent call last):
File "/work/07469/lpugalen/ls6/SpeechCLIP/example.py", line 37, in
largeFlickrParallelModel = avssl.model.KWClip_GeneralTransformer.load_from_checkpoint(largeFlickrParallelModelPath).to(device)
File "/work/07469/lpugalen/ls6/SpeechCLIP/pytorch_lightning/core/saving.py", line 156, in load_from_checkpoint
model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
File "/work/07469/lpugalen/ls6/SpeechCLIP/pytorch_lightning/core/saving.py", line 204, in _load_model_state
keys = model.load_state_dict(checkpoint["state_dict"], strict=strict)
File "/work/07469/lpugalen/ls6/SpeechCLIP/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for KWClip_GeneralTransformer:
size mismatch for criterion.eye_mat: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for criterion.neg_eye_mat: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for criterion.eye_mat_fl: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([256, 256]).
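The mismatch suggests the model is being instantiated with base-sized (256-dim) criterion buffers while the large checkpoints store 1024-dim ones. A quick way to see what a checkpoint actually contains before loading it (a generic PyTorch sketch, not repository code) is:

```python
import torch

def inspect_checkpoint_shapes(ckpt_path, substring=""):
    """Return {key: shape} for every state_dict tensor whose key contains
    `substring`, so the shapes can be compared against the instantiated
    model."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # Lightning nests under "state_dict"
    return {k: tuple(v.shape) for k, v in state.items()
            if substring in k and hasattr(v, "shape")}
```

For example, `inspect_checkpoint_shapes(model_fp, "criterion")` would show whether the checkpoint's eye_mat buffers are 1024x1024, pointing to a config mismatch rather than a corrupted file.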

about the speech-text implementation

Hi, I'm trying to reproduce the results in your paper. However, I could not find the implementation for speech-text and text-speech retrieval. Could you share this part of the code?
I also tried to implement it myself, but I ran into some problems:

  1. During inference, forward_text calls original2Reduced twice: once in forward_text itself and again in prep_text.
  2. When I try the text prompt 'turn on', I get a KeyError. Does this mean the token is not in the reduced embedding? How can I solve this?

about training codes

Thank you very much for this pioneering and inspiring work. I hope to follow and reproduce it, especially the part on how to train parallel SpeechCLIP, but I could not find that part in the repository.

Could you provide some rough training code showing how to call the proposed speech encoder and the image encoder of CLIP, how the contrastive loss is then computed, and how backpropagation is performed? Alternatively, some reference code or blog posts about the training would also help.

Thank you very much!
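For reference, a generic CLIP-style symmetric contrastive (InfoNCE) training step looks like the following. This is a sketch of the standard formulation, not the repository's actual training code; the encoder outputs are assumed to be paired (batch, dim) embeddings:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(audio_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    audio_emb, image_emb: (batch, dim) tensors from the speech encoder
    and CLIP's image encoder; row i of each tensor is a matched pair."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = audio_emb @ image_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs sit on the diagonal; contrast in both directions.
    loss_a2i = F.cross_entropy(logits, targets)
    loss_i2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2i + loss_i2a) / 2
```

A typical step then calls `loss.backward()` and `optimizer.step()` on the speech branch's parameters; in SpeechCLIP-style setups the CLIP encoders are usually kept frozen.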
