
CountNet

Speaker Count Estimation using Deep Neural Networks


CountNet is a deep learning model for estimating the number of concurrent speakers from single-channel mixtures. This is a very challenging task and a mandatory first step in addressing any realistic “cocktail-party” scenario. It has various audio-based applications such as blind source separation, speaker diarisation, and audio surveillance.

This repo provides pre-trained models.

Publications

2019: IEEE/ACM Transactions on Audio, Speech, and Language Processing

  • Title: CountNet: Estimating the Number of Concurrent Speakers Using Supervised Learning
  • Authors: Fabian-Robert Stöter, Soumitro Chakrabarty, Bernd Edler, Emanuël A. P. Habets
  • Preprint: HAL
  • Proceedings: IEEE (paywall)

2018: ICASSP

  • Title: Classification vs. Regression in Supervised Learning for Single Channel Speaker Count Estimation
  • Authors: Fabian-Robert Stöter, Soumitro Chakrabarty, Bernd Edler, Emanuël A. P. Habets
  • Preprint: arXiv 1712.04555
  • Proceedings: IEEE (paywall)

Demos

A demo video is provided on the accompanying website.

Usage

This repository provides the Keras model to be used from Python to infer count estimates. The preprocessing depends on librosa and scikit-learn. Note that the provided model is trained on 16 kHz samples of 5 seconds duration.
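
A minimal inference sketch in Python is shown below. The feature extraction (STFT parameters, scaling) is an assumption for illustration, since the actual preprocessing lives in predict.py; only the 16 kHz / 5 s input format and the exp custom object (visible in the issue tracebacks further down) come from this page.

# Minimal inference sketch. The STFT settings below are assumptions;
# the real preprocessing is implemented in predict.py.
import numpy as np
import librosa
import keras
from keras import backend as K

# load 5 seconds of mono audio at 16 kHz, the format the model was trained on
audio, rate = librosa.load('examples/5_speakers.wav', sr=16000, duration=5.0)

# hypothetical features: log-magnitude spectrogram with frames as time steps
features = np.log1p(np.abs(librosa.stft(audio, n_fft=400, hop_length=160)).T)

# the saved models use K.exp as a custom object (see the tracebacks in the issues)
model = keras.models.load_model('models/CRNN.h5', custom_objects={'exp': K.exp})
probs = model.predict(features[np.newaxis, ..., np.newaxis])  # add batch/channel axes
print('Speaker Count Estimate:', int(np.argmax(probs, axis=-1)[0]))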

Docker

Docker makes it easy to reproduce the results and install all requirements. If you have Docker installed, follow these steps to predict a count from the provided test sample.

  • Build the docker image: docker build -t countnet .
  • Predict from example: docker run -i countnet python predict.py --model CRNN examples/5_speakers.wav

Manual Installation

To install the requirements using Anaconda Python, run

conda env create -f env.yml

You can now run the command line script and process wav files using the pre-trained model CRNN (best performance).

python predict.py examples/5_speakers.wav --model CRNN

Reproduce Paper Results using the LibriCount Dataset


The full test dataset is available for download on Zenodo.

LibriCount10 0dB Dataset

The dataset contains a simulated cocktail party environment of [0..10] speakers, mixed at 0 dB SNR from random utterances of different speakers from the LibriSpeech CleanTest dataset.

For each recording, the ground-truth number of speakers is given in the file name: k in k_uniquefile.wav is the maximum number of concurrent speakers within the 5 seconds of recording.

All recordings are 5 s in duration. For each unique recording, we provide the audio wave file (16 bit, 16 kHz, mono) and an annotation json file with the same name as the recording.

Metadata

In the annotation file we provide each speaker's sex, their unique speaker_id, and their vocal activity within the mixture recording, given in samples. Note that these annotations were generated automatically using a voice activity detection method.

In the following example, the ground-truth speaker count is 3.

[
	{
		"sex": "F", 
		"activity": [[0, 51076], [51396, 55400], [56681, 80000]], 
		"speaker_id": 1221
	}, 
	{
		"sex": "F", 
		"activity": [[0, 51877], [56201, 80000]], 
		"speaker_id": 3570
	}, 
	{
		"sex": "M", 
		"activity": [[0, 15681], [16161, 68213], [73498, 80000]], 
		"speaker_id": 5105
	}
]
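
For illustration, here is a short Python sketch that reads such an annotation file and derives the ground-truth count; the file name is hypothetical (real files follow the k_uniquefile pattern):

import json

# hypothetical file name following the k_uniquefile naming scheme
with open('3_1221.json') as f:
    speakers = json.load(f)

# the ground-truth count is the number of entries in the list (3 above)
print('ground-truth speaker count:', len(speakers))

# activity segments are given in samples at 16 kHz
for s in speakers:
    voiced = sum(end - start for start, end in s['activity'])
    print(s['speaker_id'], s['sex'], '%.2f s voiced' % (voiced / 16000))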

Running evaluation

Running python eval.py ~/path/to/LibriCount10-0dB --model CRNN outputs the mean absolute error (MAE), both per class and averaged over all classes.
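
For reference, the sketch below shows one way to compute these metrics; it is an illustration, not necessarily the exact logic of eval.py:

import numpy as np

def mae_report(y_true, y_pred, n_classes=11):
    # mean absolute error per ground-truth class and averaged over all files
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    errors = np.abs(y_true - y_pred)
    per_class = {k: errors[y_true == k].mean()
                 for k in range(n_classes) if np.any(y_true == k)}
    return per_class, errors.mean()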

Pretrained models

Name     Number of Parameters     MAE on test set
RNN      0.31M                    0.38
F-CRNN   0.06M                    0.36
CRNN     0.35M                    0.27

FAQ

Is it possible to convert the model to run on a modern version of Keras with the TensorFlow backend?

Yes, it's possible, but I was unable to get identical results when converting the model. I tried this guide, but it still didn't reach the same performance as keras 1.2.2 with the Theano backend.

License

MIT

Contributors

faroit, jonashaag


Issues

couldn't load CRNN

Hi there,
when I run this:

python predict.py examples/5_speakers.wav --model CRNN

it returns:

Traceback (most recent call last):
  File "predict.py", line 69, in <module>
    'exp': K.exp
  File "/usr/local/lib/python3.7/dist-packages/keras/saving/save.py", line 202, in load_model
    compile)
  File "/usr/local/lib/python3.7/dist-packages/keras/saving/hdf5_format.py", line 181, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/usr/local/lib/python3.7/dist-packages/keras/saving/model_config.py", line 59, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/usr/local/lib/python3.7/dist-packages/keras/layers/serialization.py", line 163, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 672, in deserialize_keras_object
    list(custom_objects.items())))
  File "/usr/local/lib/python3.7/dist-packages/keras/engine/sequential.py", line 493, in from_config
    custom_objects=custom_objects)
  File "/usr/local/lib/python3.7/dist-packages/keras/layers/serialization.py", line 163, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 675, in deserialize_keras_object
    deserialized_obj = cls.from_config(cls_config)
  File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 716, in from_config
    return cls(**config)
  File "/usr/local/lib/python3.7/dist-packages/keras/layers/convolutional.py", line 2859, in __init__
    super(ZeroPadding2D, self).__init__(**kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py", line 522, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 323, in __init__
    generic_utils.validate_kwargs(kwargs, allowed_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 1134, in validate_kwargs
    raise TypeError(error_message, kwarg)
TypeError: ('Keyword argument not understood:', 'input_dtype')

Could you point out the problem?

ImportError: nvcuda.dll

I'm getting the following error:

ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.

Any idea how to resolve this?
many thanks!

__init__() missing 1 required positional argument: 'output_dim'

model = keras.models.load_model(os.path.join('models', 'RNN_keras2.h5'))

File "predict_audio.py", line 24, in
os.path.join('models', 'RNN_keras2.h5')
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\models.py", line 142, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\models.py", line 193, in model_from_config
return layer_from_config(config, custom_objects=custom_objects)
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\utils\layer_utils.py", line 42, in layer_from_config
return layer_class.from_config(config['config'])
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\models.py", line 1085, in from_config
layer = get_or_create_layer(first_layer)
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\models.py", line 1069, in get_or_create_layer
layer = layer_from_config(layer_data)
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\utils\layer_utils.py", line 42, in layer_from_config
return layer_class.from_config(config['config'])
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\layers\wrappers.py", line 41, in from_config
layer = layer_from_config(config.pop('layer'))
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\utils\layer_utils.py", line 42, in layer_from_config
return layer_class.from_config(config['config'])
File "C:\Users\mj\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1025, in from_config
return cls(**config)
TypeError: init() missing 1 required positional argument: 'output_dim'

Is there a way to use this model for inference on tracks that are less than 5 seconds?

Actually, I am trying to build a real-time or near real-time system that selects a primary speaker from a track containing concurrent overlapping speakers; this would then feed into an ASR model. However, this particular pretrained model seems to only work for tracks that are at least 5 seconds long. Is there a way to overcome this?
Would I have to train the model again? Would it hurt the MAE scores?
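
One common workaround, which is not part of this repository and whose effect on accuracy would need to be verified, is to zero-pad shorter clips to the expected 5 s at 16 kHz:

import numpy as np
import librosa

audio, rate = librosa.load('short_clip.wav', sr=16000)  # hypothetical input file
target_len = 5 * rate
if len(audio) < target_len:
    # pad with trailing silence up to the 5 second input length
    audio = np.pad(audio, (0, target_len - len(audio)))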

RuntimeWarning for `tensorflow.python.framework.fast_tensor_util` version mismatch

Could the RuntimeWarning and UserWarning below be fixed?

root@bj:~/tf_notebook/CountNet# docker run -v /data:/data -i countnet python predict_audio.py /data/2.wav_trim.wav
2018-02-01 10:59:43.933026: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
Using TensorFlow backend.
/usr/local/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
/usr/local/lib/python3.6/site-packages/keras/models.py:252: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
bidirectional_1 (Bidirection (None, 500, 60)           55680
_________________________________________________________________
bidirectional_2 (Bidirection (None, 500, 40)           12960
_________________________________________________________________
bidirectional_3 (Bidirection (None, 500, 80)           25920
_________________________________________________________________
maxpooling1d_1 (MaxPooling1D (None, 250, 80)           0
_________________________________________________________________
flatten_1 (Flatten)          (None, 20000)             0
_________________________________________________________________
dense_1 (Dense)              (None, 11)                220011
_________________________________________________________________
activation_1 (Activation)    (None, 11)                0
=================================================================
Total params: 314,571
Trainable params: 314,571
Non-trainable params: 0
_________________________________________________________________
Speaker Count Estimate:  3

Dockerfile & requirements.txt modification

  • Dockerfile should provide a VOLUME for testing wav files outside the container.
  • protobuf==3.5.0 in requirements.txt is no longer available; it is now protobuf-3.5.0.post1
diff --git a/Dockerfile b/Dockerfile
index 2a75803..3c1098f 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -8,3 +8,5 @@ WORKDIR /app

 RUN pip install --upgrade pip && \
     pip install -r requirements.txt
+
+VOLUME /data
diff --git a/requirements.txt b/requirements.txt
index 9c68fb4..85b1cce 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11,7 +11,7 @@ llvmlite==0.20.0
 Markdown==2.6.9
 numba==0.35.0
 numpy==1.13.3
-protobuf==3.5.0
+protobuf>=3.5.0
 PyYAML==3.12
 resampy==0.2.0
 scikit-learn==0.19.1

docker run fails

 $ docker run -i countnet python predict.py --model CRNN examples/5_speakers.wav
Unable to find image 'countnet:latest' locally
docker: Error response from daemon: pull access denied for countnet, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
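
This error usually means the countnet image has not been built locally, so Docker tries to pull it from a registry instead. Building the image first, as described in the Docker section above, should resolve it:

docker build -t countnet .
docker run -i countnet python predict.py --model CRNN examples/5_speakers.wav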

Is the training data available?

Hey! I really love your work and I'm wondering whether you could provide the training data you synthesized from the LibriSpeech clean-360 dataset? That would help a lot!

Is there any source code?

Hi there.
I found your work recently. I'm interested in source counting in speech and chose it for my bachelor project. I'm trying to improve it with better feature engineering and some new ideas.

So far I have succeeded in loading CNN.h5, but it has so many parameters that I couldn't train it well. Other attractive models, like CRNN.h5, fail to load with the error below:
TypeError: ('Keyword argument not understood:', 'input_dtype')

I would be grateful if you would let me have the source code so I can improve CountNet.

Sincerely.

Email: Ahmad Mahmoodian Darvishani

Fix docker build

Currently, docker build is broken due to a missing requirements.txt.

Is there any code about how to train the model?

Hello,
Thanks for your paper and code for speaker count estimation; they have helped me a lot. Is there any code for training the model? Only the prediction code seems to be included. Thanks!
