

speech-emotion-recognition-using-self-attention

In this project we implement the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning", published at INTERSPEECH 2019.

Please note that some of the hyperparameters were changed to improve convergence.

NOTE: RESULTS ARE NOT AS GOOD AS THE PAPER'S. USE AT YOUR OWN RISK.

First, create the data_collected_full.pickle file by running the following script with the respective paths:

python mocap_data_collect.py
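Assuming the script writes a standard Python pickle (the record structure below is an illustrative assumption, not the repo's actual schema), you can sanity-check the output before building the cross-validation folds:

```python
import pickle

# Hypothetical record: the real file is produced by mocap_data_collect.py.
# Here we round-trip a dummy entry to show how to inspect the pickle.
record = {'audio': [0.0, 0.1], 'emotion': 'hap', 'gender': 'F'}
with open('data_collected_full.pickle', 'wb') as f:
    pickle.dump([record], f)

# Load it back and inspect the top-level structure.
with open('data_collected_full.pickle', 'rb') as f:
    data = pickle.load(f)

print(len(data), data[0]['emotion'])  # 1 hap
```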

Creating the 5-fold cross-validation splits

First we need to create a pickle file for every combination of the 5-fold cross-validation. Use dataset.py and change the paths accordingly:

python dataset.py
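A minimal sketch of what the splitting step amounts to, assuming the common leave-one-session-out protocol for IEMOCAP's five recording sessions (the session names and output file names are assumptions, not taken from dataset.py):

```python
# IEMOCAP has 5 recording sessions, so each fold holds out one session
# for testing and trains on the remaining four (session ids assumed).
sessions = ['Ses01', 'Ses02', 'Ses03', 'Ses04', 'Ses05']

folds = []
for test_session in sessions:
    train_sessions = [s for s in sessions if s != test_session]
    folds.append({'train': train_sessions, 'test': [test_session]})

# Each fold would then be written to its own pickle, e.g.:
# with open('fold_0.pickle', 'wb') as f:
#     pickle.dump(folds[0], f)

for i, fold in enumerate(folds):
    print(i, fold['test'])
```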

Training

This step trains the CNN-BLSTM model:

python train_audio_only.py

The average accuracy is about 53% (UW) and 52% (WA) for the CNN-BLSTM. The paper reports 55% (WA) and 51% (UW) using the CNN-BLSTM-attention model.
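For reference, weighted accuracy (WA) is the overall fraction of correct predictions, while unweighted accuracy (UW, also called UA) averages per-class recall so that minority emotions count as much as majority ones. A small self-contained illustration (the toy labels are made up):

```python
from collections import defaultdict

def weighted_accuracy(y_true, y_pred):
    # WA: overall fraction of correct predictions.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def unweighted_accuracy(y_true, y_pred):
    # UW/UA: per-class recall averaged over classes, so a class with
    # few samples contributes as much as a frequent one.
    per_class = defaultdict(lambda: [0, 0])  # class -> [correct, total]
    for t, p in zip(y_true, y_pred):
        per_class[t][1] += 1
        if t == p:
            per_class[t][0] += 1
    recalls = [c / n for c, n in per_class.values()]
    return sum(recalls) / len(recalls)

# Toy example with an imbalanced label set.
y_true = ['neu', 'neu', 'neu', 'ang', 'hap', 'sad']
y_pred = ['neu', 'neu', 'neu', 'neu', 'hap', 'sad']
print(round(weighted_accuracy(y_true, y_pred), 3))    # 0.833
print(round(unweighted_accuracy(y_true, y_pred), 3))  # 0.75
```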

WORK IN PROGRESS. A few preprocessing scripts were taken from https://github.com/Samarth-Tripathi/IEMOCAP-Emotion-Detection

Paper link: https://www.isca-speech.org/archive/Interspeech_2019/pdfs/2594.pdf

speech-emotion-recognition-using-self-attention's People

Contributors

krishnadn


speech-emotion-recognition-using-self-attention's Issues

How to run the multi-task learning model?

Hi,
Thank you for sharing your code!
I read the README file, but it only contains the procedure for training the audio-only model.
Could you kindly share the procedure for training the multi-task model with gender classification?
Thank you.

The results of the code

Hi, I cannot get the same results as the paper either, and there is overfitting: the training WA is over 80%, but the test WA is about 50%. Do you have the same problem? Finally, I would like to ask about your WA and UA results.

about the results

Hello, I want to ask whether you got the same results as reported in this paper. I tried my best but can't reproduce them. The attached file contains some details about my code; I would like to know if something is wrong with it. Thanks.
code.txt
