
Speech-Emotion-Classification-with-PyTorch

This repository contains PyTorch implementations of four different models for speech emotion classification:

  1. Stacked Time Distributed 2D CNN - LSTM
  2. Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention
  3. Parallel 2D CNN - Bidirectional LSTM with attention
  4. Parallel 2D CNN - Transformer Encoder

DATASET

Models are trained on the RAVDESS Emotional Speech Audio dataset, which consists of 1440 speech audio-only files (16-bit, 48 kHz, .wav).
The dataset is balanced:
[figure: distribution of samples per emotion class]
Emotions have two intensities, strong and normal (except for the neutral emotion, which has only normal intensity):
[figure: distribution of samples per emotion intensity]

PREPROCESSING

Signals are loaded at a 48 kHz sample rate and trimmed to the [0.5, 3] second range; signals shorter than 3 s are zero-padded.
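A minimal sketch of this loading step, assuming librosa is used (the file path is a placeholder, not from the repository):

import numpy as np
import librosa

SAMPLE_RATE = 48000
MAX_LEN = 3 * SAMPLE_RATE  # 3 seconds at 48 kHz

# Skip the first 0.5 s, keep at most 3 s, then zero-pad shorter signals
signal, _ = librosa.load('path/to/audio.wav', sr=SAMPLE_RATE, offset=0.5, duration=3.0)
padded = np.zeros(MAX_LEN, dtype=np.float32)
padded[:len(signal)] = signal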
A MEL spectrogram is computed and used as the model input (for the 1st and 2nd models, the spectrogram is split into 7 chunks).
An example MEL spectrogram:
[figure: MEL spectrogram of a speech sample]
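Continuing the sketch above, the spectrogram step might look like this; the FFT size, hop length, mel-band count, and whether the chunks overlap are illustrative assumptions, not the repository's exact values:

# Compute a MEL spectrogram and convert it to dB scale
mel = librosa.feature.melspectrogram(y=padded, sr=SAMPLE_RATE,
                                     n_fft=1024, hop_length=256, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

# For models 1 and 2: split the time axis into 7 chunks
chunks = np.array_split(mel_db, 7, axis=1)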
The dataset is split into train, validation, and test sets with an (80, 10, 10)% ratio.
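One way to realize the 80/10/10 split, given arrays X (spectrograms) and y (labels); stratification and the random seed are assumptions:

from sklearn.model_selection import train_test_split

# Carve off 20%, then halve it into validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)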
Data augmentation is performed by adding Additive White Gaussian Noise (AWGN, with SNR in the range [15, 30]) to the original signal. This substantially improved accuracy and reduced overfitting.
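A sketch of the augmentation, assuming the SNR range is in dB and drawn uniformly per signal:

def add_awgn(signal, snr_low=15, snr_high=30):
    # Draw a target SNR (dB), derive the matching noise power, add white noise
    snr_db = np.random.uniform(snr_low, snr_high)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

augmented = add_awgn(padded)  # added to the training set alongside the original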
Datasets are scaled with scikit-learn's StandardScaler.
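The scaling step could be implemented like this, continuing from the split above (flattening each spectrogram before fitting is an assumed convention):

from sklearn.preprocessing import StandardScaler

# Fit on the training set only; apply the same transform to val and test
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train.reshape(X_train.shape[0], -1)).reshape(X_train.shape)
X_val = scaler.transform(X_val.reshape(X_val.shape[0], -1)).reshape(X_val.shape)
X_test = scaler.transform(X_test.reshape(X_test.shape[0], -1)).reshape(X_test.shape)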

MODELS

The architectures of all four models are shown below, from left to right:

[figure: architectures of the four models]
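As a rough, self-contained sketch of the fourth architecture (Parallel 2D CNN - Transformer Encoder); the layer sizes, pooling, and head counts here are assumptions, not the repository's definition:

import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    # Sketch: a 2D CNN branch and a Transformer encoder branch run in parallel
    # over the MEL spectrogram; their features are concatenated for classification
    def __init__(self, num_emotions, n_mels=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=n_mels, nhead=4, dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.fc = nn.Linear(32 + n_mels, num_emotions)

    def forward(self, x):
        # x: (batch, 1, n_mels, time)
        conv_out = self.conv(x).flatten(1)             # (batch, 32)
        seq = x.squeeze(1).permute(0, 2, 1)            # (batch, time, n_mels)
        trans_out = self.transformer(seq).mean(dim=1)  # (batch, n_mels)
        logits = self.fc(torch.cat([conv_out, trans_out], dim=1))
        return logits, torch.softmax(logits, dim=1)

model = ParallelCNNTransformer(num_emotions=8)  # RAVDESS has 8 emotion classes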

RESULTS

1. Model (Stacked Time Distributed 2D CNN - LSTM):
Accuracy: 94.02%
[figures: confusion matrix; influence of emotion intensity on correctness]

2. Model (Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention):
Accuracy: 96.55%
[figures: confusion matrix; influence of emotion intensity on correctness]

3. Model (Parallel 2D CNN - Bidirectional LSTM with attention):
Accuracy: 95.40%
[figures: confusion matrix; influence of emotion intensity on correctness]

4. Model (Parallel 2D CNN - Transformer Encoder):
Accuracy: 96.78%
[figures: confusion matrix; influence of emotion intensity on correctness]

ISSUES

Seems like a bug

Hello, your models are pretty good, but I found something weird here:

mel_train = []
print("Calculatin mel spectrograms for train set")
for i in range(X_train.shape[0]):
    mel_spectrogram = getMELspectrogram(X_train[0,:], sample_rate=SAMPLE_RATE)
    mel_train.append(mel_spectrogram)
    print("\r Processed {}/{} files".format(i,X_train.shape[0]),end='')
print('')
del X_train

mel_val = []
print("Calculatin mel spectrograms for val set")
for i in range(X_val.shape[0]):
    mel_spectrogram = getMELspectrogram(X_val[0,:], sample_rate=SAMPLE_RATE)
    mel_val.append(mel_spectrogram)
    print("\r Processed {}/{} files".format(i,X_val.shape[0]),end='')
print('')
del X_val

mel_test = []
print("Calculatin mel spectrograms for test set")
for i in range(X_test.shape[0]):
    mel_spectrogram = getMELspectrogram(X_test[0,:], sample_rate=SAMPLE_RATE)
    mel_test.append(mel_spectrogram)
    print("\r Processed {}/{} files".format(i,X_test.shape[0]),end='')
print('')
del X_test

It always loads the first sample; I think the index should be i instead of 0. A corrected version of the first loop is sketched below.
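For reference, the first loop with the reporter's fix applied (the same change applies to the validation and test loops):

mel_train = []
print("Calculating mel spectrograms for train set")
for i in range(X_train.shape[0]):
    # Fix: index the i-th sample instead of always taking row 0
    mel_spectrogram = getMELspectrogram(X_train[i, :], sample_rate=SAMPLE_RATE)
    mel_train.append(mel_spectrogram)
    print("\rProcessed {}/{} files".format(i + 1, X_train.shape[0]), end='')
print('')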

Hyperparameter setting

Thank you for sharing your very impressive model and results. I have one question for you.

I want to reproduce the performance you report below, but how should I set the hyperparameters?

  1. Stacked Time Distributed 2D CNN - LSTM (94.02%)
  2. Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention (96.55%)
  3. Parallel 2D CNN - Bidirectional LSTM with attention (95.40%)
  4. Parallel 2D CNN - Transformer Encoder (96.78%)

PyTorch DataLoaders are used; now how do I set up model training as before?

Hi,
As you suggested, I have used PyTorch DataLoaders because of CUDA memory issues. Please guide me, with code, on how to train, validate, and test the model to replicate your results.
I have tried the code below.

# Assumed imports for this snippet; Dataset, HybridModel, EMOTIONS,
# make_train_step, and make_validate_fnc come from the repository notebook
import time
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader as DL
from sklearn.metrics import accuracy_score

train_set = Dataset(X=X_train, y=Y_train, mode="train")
tr_loader = DL(train_set, batch_size=8, num_workers=0, shuffle=True)

test_set = Dataset(X=X_test, y=Y_test, mode="train")  # used as the validation split below
ts_loader = DL(test_set, batch_size=8, num_workers=0, shuffle=False)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Selected device is {}'.format(device))
model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
print('Number of trainable params: ', sum(p.numel() for p in model.parameters()))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-3, momentum=0.8)
#%%
# Fix: 'loss_fnc' was undefined; pass the criterion defined above
train_step = make_train_step(model, criterion, optimizer=optimizer)
validate = make_validate_fnc(model, criterion)

#%%
verbose=True
Losses = []
Accuracies = []
epochs=50
DLS = {"train": tr_loader, "valid": ts_loader}

start_time = time.time()
for e in range(epochs):
    epochLoss = {"train": 0, "valid": 0}
    epochAccs = {"train": 0, "valid": 0}

    for phase in ["train", "valid"]:
        if phase == "train":
            model.train()
        else:
            model.eval()

        lossPerPass = []
        accuracy = []

        for X, y in DLS[phase]:
            X, y = X.to(device), y.to(device).view(-1)

            optimizer.zero_grad()
            alpha=1.0
            beta=1.0
            with torch.set_grad_enabled(phase == "train"):
              
                pred_emo, output_softmax= model(X)
                emotion_loss = criterion(pred_emo,y)
              
                total_loss = alpha*emotion_loss
                if phase == "train":
                    total_loss.backward()
                    optimizer.step()
            lossPerPass.append(total_loss.item())
            accuracy.append(accuracy_score(torch.argmax(torch.exp(output_softmax.detach().cpu()), dim=1), y.cpu()))
        epochLoss[phase] = np.mean(np.array(lossPerPass))
        epochAccs[phase] = np.mean(np.array(accuracy))
    # Epoch checkpoint: save once per epoch instead of once per batch
    torch.save(model.state_dict(), "E................................/Epoch_{}.pt".format(e + 1))
    Losses.append(epochLoss)
    Accuracies.append(epochAccs)

    

    if verbose:
        print("Epoch : {} | Train Loss : {:.5f} | Valid Loss : {:.5f} \
| Train Accuracy : {:.5f} | Valid Accuracy : {:.5f}".format(e + 1, epochLoss["train"], epochLoss["valid"],
                                                        epochAccs["train"], epochAccs["valid"]))

Is this okay? I am using a batch size of 8, but the model performs very poorly with this training: over 50 epochs the accuracy fluctuates between 25% and 30%. How many epochs did you run it for?

CUDA out of memory for batch_size=1

Hi there, I just started reusing your first model (the stacked CNN); everything works fine until training.
Even with batch_size=1 it throws a "CUDA out of memory" error.

Is there any other solution for this? I think you have not used any DataLoaders here, which is probably the reason. Please explain and guide me on what to do.

RuntimeError: CUDA out of memory

When training the model, the following error occurs:

Selected device is cuda

RuntimeError Traceback (most recent call last)
in
4 device = 'cuda' if torch.cuda.is_available() else 'cpu'
5 print('Selected device is {}'.format(device))
----> 6 model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
7 print('Number of trainable params: ',sum(p.numel() for p in model.parameters()))
8 OPTIMIZER = torch.optim.SGD(model.parameters(),lr=0.01, weight_decay=1e-3, momentum=0.8)

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in to(self, *args, **kwargs)
671 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
672
--> 673 return self._apply(convert)
674
675 def register_backward_hook(

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
385 def _apply(self, fn):
386 for module in self.children():
--> 387 module._apply(fn)
388
389 def compute_should_use_set_data(tensor, tensor_applied):

~\anaconda3\lib\site-packages\torch\nn\modules\rnn.py in _apply(self, fn)
177
178 def _apply(self, fn):
--> 179 ret = super(RNNBase, self)._apply(fn)
180
181 # Resets _flat_weights

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
407 # with torch.no_grad():
408 with torch.no_grad():
--> 409 param_applied = fn(param)
410 should_use_set_data = compute_should_use_set_data(param, param_applied)
411 if should_use_set_data:

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in convert(t)
669 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
670 non_blocking, memory_format=convert_to_format)
--> 671 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
672
673 return self._apply(convert)

RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 1.08 GiB already allocated; 1.44 MiB free; 1.11 GiB reserved in total by PyTorch)

Decreasing the batch size doesn't help.
