
Cacophony

Inference codebase for "Cacophony: An Improved Contrastive Audio-Text Model"

Abstract

Despite recent advancements in audio-text modeling, audio-text contrastive models still lag behind their image-text counterparts in scale and performance. We propose a method to improve both the scale and the training of audio-text contrastive models. Specifically, we craft a large-scale audio-text dataset containing 13,000 hours of text-labeled audio, using pretrained language models to process noisy text descriptions and automatic captioning to obtain text descriptions for unlabeled audio samples. We first train on audio-only data with a masked autoencoder (MAE) objective, which allows us to benefit from the scalability of unlabeled audio datasets. We then, initializing our audio encoder from the MAE model, train a contrastive model with an auxiliary captioning objective. Our final model, which we name Cacophony, achieves state-of-the-art performance on audio-text retrieval tasks, and exhibits competitive results on the HEAR benchmark and other downstream tasks such as zero-shot classification.



Requirements

The model is implemented in JAX and Flax. Tested on an RTX 2080 Ti with CUDA 11.5, cuDNN 8.2.1, cudatoolkit 11.3.1, and Python 3.8.17.

pip install -r requirements.txt
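
After installing, a quick sanity check that JAX actually sees the GPU (a generic check, not part of this repo):

```python
# Minimal check that the pinned JAX/CUDA/cuDNN combination detects the GPU.
import jax

print(jax.devices())          # expect a GPU device entry on a CUDA machine
print(jax.default_backend())  # expect "gpu"; "cpu" means the CUDA install was not picked up
```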

Pretrained Models

We provide pretrained checkpoints for both stages of the Cacophony model in the folder here.

Stage 1: AudioMAE

Model detail:

  • Filename: AudioMAE.ckpt
  • Audio sampling rate: 16 kHz
  • Audio Encoder size: 85.26M parameters
  • Audio Decoder size: 85.85M parameters
  • File MD5: 3a8a7778a5e2013ceb4a418e1504d3d8

Stage 2: Cacophony

Model detail:

  • Filename: Cacophony.ckpt
  • Audio sampling rate: 16 kHz
  • Audio Encoder size: 85.26M parameters
  • Text Encoder size: 125.23M parameters
  • Text Decoder size: 76.46M parameters
  • File MD5: bb6aa4b4e8e90ea3595021bf8233add0
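
To verify a downloaded checkpoint against the MD5 checksums listed above, a minimal check (the paths are wherever you saved the files):

```python
# Compute a file's MD5 in chunks and compare it with the published checksum.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert md5_of("AudioMAE.ckpt") == "3a8a7778a5e2013ceb4a418e1504d3d8"
assert md5_of("Cacophony.ckpt") == "bb6aa4b4e8e90ea3595021bf8233add0"
```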

Evaluation Results

The evaluation datasets include the HEAR benchmark, AudioCaps, Clotho, UrbanSound8K, ESC-50, TUT Acoustic Scene 2017, and the VGGSound test set. Since our model is trained on audio sampled at 16 kHz, we first resample all audio from these datasets to 16 kHz to match the training stage.
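
For example, a single file can be resampled with librosa and soundfile (these libraries are an assumption for illustration; any resampler works):

```python
# Load at 16 kHz (librosa resamples on load) and write the result back out.
import librosa
import soundfile as sf

audio, sr = librosa.load("example.wav", sr=16000, mono=True)
sf.write("example_16k.wav", audio, sr)
```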

1. Audio-Text Retrieval

We evaluate the model on the audio-text retrieval task using the AudioCaps and Clotho datasets.

python eval_caco.py --task ar --model_path <path_to_model>

Reproducible results for the Audio-Text retrieval task are as follows:

| Dataset | Text→Audio R@1 | Text→Audio R@5 | Text→Audio R@10 | Audio→Text R@1 | Audio→Text R@5 | Audio→Text R@10 |
|---|---|---|---|---|---|---|
| AudioCaps | 0.410 | 0.753 | 0.864 | 0.553 | 0.836 | 0.924 |
| Clotho | 0.202 | 0.459 | 0.588 | 0.265 | 0.541 | 0.762 |
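
For reference, a minimal sketch of how recall@k is typically computed from a text-to-audio similarity matrix; the names below are illustrative and simplify the multi-caption case, they are not the internals of eval_caco.py:

```python
import numpy as np

def recall_at_k(similarity, k):
    """similarity[i, j] is the score between text query i and audio clip j;
    the matching audio for query i is assumed to sit at index i."""
    num_queries = similarity.shape[0]
    top_k = np.argsort(-similarity, axis=1)[:, :k]                 # best k clips per query
    hits = (top_k == np.arange(num_queries)[:, None]).any(axis=1)  # query found its clip?
    return float(hits.mean())

# e.g. similarity = text_embeddings @ audio_embeddings.T  (L2-normalized embeddings)
```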

2. Zero-Shot Classification

We evaluate the model on the zero-shot classification task using the UrbanSound8K, ESC-50, TUT Acoustic Scene 2017, and VGGSound-test datasets. Note that some of the audio in the VGGSound test set is no longer publicly available, so we were unable to evaluate on the full dataset; instead, we evaluate on 12,722 samples.

python eval_caco.py --task zs --model_path <path_to_model>

| ESC-50 | UrbanSound8K | TUT Acoustic Scene 2017 | VGGSound-test |
|---|---|---|---|
| 0.934 | 0.771 | 0.486 | 0.271 |
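
Zero-shot classification with a contrastive audio-text model follows the usual CLIP-style recipe: embed each class label as a short text prompt, embed the audio clip, and pick the class with the highest cosine similarity. A minimal sketch, where the encoder callables and the prompt template are assumptions rather than this repo's API:

```python
import numpy as np

def zero_shot_classify(audio_embedding, class_names, encode_text):
    # Build one prompt per class and embed it with the (hypothetical) text encoder.
    prompts = [f"This is a sound of {name}." for name in class_names]
    text_emb = np.stack([encode_text(p) for p in prompts])

    # Cosine similarity between the audio clip and every class prompt.
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    audio_emb = audio_embedding / np.linalg.norm(audio_embedding)
    scores = text_emb @ audio_emb
    return class_names[int(np.argmax(scores))]
```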

3. Audio Captioning

python eval_caco.py --task caption --model_path <path_to_model>

4. HEAR Benchmark

The HEAR benchmark cannot be run from our main environment, so we provide separate code for it in the hear directory. To run the benchmark, follow the instructions in the hear directory.

See run_hear_eval.sh for details. Example command:

bash run_hear_eval.sh /path/to/AudioMAE.ckpt /path/to/embedding /path/to/hear ./tasklist/hear_all_tasks.txt 0 16000

HEAR Benchmark Results

To complement the radar chart in the paper, we present the accuracy numbers for the HEAR benchmark alongside those of other baseline models including LAION-CLAP, MS-CLAP, WavCaps-CNN14, and WavCaps-HTSAT.

| Model | ESC-50 | LibriCount | CREMA-D | Gunshot | SC 5hr | SC Full | VoxLingua | Vocal Imitation | NSynth Pitch 5hr | NSynth Pitch 50hr | GTZAN Genre | GTZAN Music Speech | Beijing Opera Percussion |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LAION-CLAP-fusion | 0.964 | 0.625 | 0.566 | 0.914 | 0.693 | 0.758 | 0.264 | 0.155 | 0.172 | 0.376 | 0.842 | 0.962 | 0.962 |
| LAION-CLAP | 0.971 | 0.659 | 0.557 | 0.845 | 0.693 | 0.774 | 0.189 | 0.151 | 0.180 | 0.423 | 0.838 | 0.969 | 0.953 |
| MS-CLAP | 0.930 | 0.649 | 0.547 | 0.798 | 0.511 | 0.626 | 0.236 | 0.106 | 0.112 | 0.274 | 0.818 | 0.992 | 0.932 |
| WavCaps-CNN14 | 0.962 | 0.646 | 0.556 | 0.789 | 0.583 | 0.640 | 0.270 | 0.158 | 0.140 | 0.324 | 0.861 | 0.992 | 0.957 |
| WavCaps-HTSAT | 0.961 | 0.690 | 0.595 | 0.929 | 0.752 | 0.806 | 0.234 | 0.168 | 0.256 | 0.548 | 0.847 | 0.962 | 0.958 |
| Stage 1: AudioMAE (Ours) | 0.870 | 0.778 | 0.697 | 0.940 | 0.886 | 0.922 | 0.488 | 0.179 | 0.720 | 0.842 | 0.838 | 0.969 | 0.953 |
| Stage 2: Cacophony (Ours) | 0.970 | 0.660 | 0.593 | 0.833 | 0.680 | 0.762 | 0.262 | 0.191 | 0.420 | 0.726 | 0.850 | 0.985 | 0.970 |

Acknowledgements

We are immensely grateful to the Google TPU Research Cloud (TRC) for generously providing the computational resources vital to our project. Their support has been invaluable.

We thank the FreeSound team at Pompeu Fabra University for providing us with the scraping API. We thank the University of Rochester Goergen Institute for Data Science (GIDS) seed funding program. We thank the LAION-CLAP team for collecting open-source datasets and generously sharing them with the research community.


cacophony's Issues

HEAR results

Hey guys,
Thanks for the good work. I am just curious about the results on HEAR. Would it be possible for you to provide your results as a table of some sort, such that one can directly "copy"/cite them?
Currently, Figure 6 is a bit unclear in that regard, since I can only "guess" the values for Cacophony.
Kind regards,
Heinrich

Pre-trained models

Hi, great to see the exciting results in your paper! I see that you mention sharing the pre-trained models for both the AudioMAE and Cacophony, but it looks like the links are currently just placeholders. Any ETA on those weights?

Thanks for all the great work, and let me know if there's any way I can lend a hand!
