
mustard's Introduction

MUStARD: Multimodal Sarcasm Detection Dataset

Open in Colab

This repository contains the dataset and code for our ACL 2019 paper:

Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper)

We release the MUStARD dataset which is a multimodal video corpus for research in automated sarcasm discovery. The dataset is compiled from popular TV shows including Friends, The Golden Girls, The Big Bang Theory, and Sarcasmaholics Anonymous. MUStARD consists of audiovisual utterances annotated with sarcasm labels. Each utterance is accompanied by its context, which provides additional information on the scenario where the utterance occurs.

Example Instance

Example sarcastic utterance from the dataset along with its context and transcript.

Raw Videos

We provide a Google Drive folder with the raw video clips, including both the utterances and their respective context.

Data Format

The annotations and transcripts of the audiovisual clips are available at data/sarcasm_data.json. Each instance in the JSON file is allotted one identifier (e.g. "1_60") which is a dictionary of the following items:

Key                Value
utterance          The text of the target utterance to classify.
speaker            Speaker of the target utterance.
context            List of utterances (in chronological order) preceding the target utterance.
context_speakers   Respective speakers of the context utterances.
sarcasm            Binary sarcasm label.

Example format in JSON:

{
  "1_60": {
    "utterance": "It's just a privilege to watch your mind at work.",
    "speaker": "SHELDON",
    "context": [
      "I never would have identified the fingerprints of string theory in the aftermath of the Big Bang.",
      "My apologies. What's your plan?"
    ],
    "context_speakers": [
      "LEONARD",
      "SHELDON"
    ],
    "sarcasm": true
  }
}
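As a minimal sketch of working with this format, the snippet below parses the single instance shown above and pairs each context utterance with its speaker; in practice you would read the whole file with json.load(open("data/sarcasm_data.json")).

```python
import json

# One instance copied from the example above.
raw = """
{
  "1_60": {
    "utterance": "It's just a privilege to watch your mind at work.",
    "speaker": "SHELDON",
    "context": [
      "I never would have identified the fingerprints of string theory in the aftermath of the Big Bang.",
      "My apologies. What's your plan?"
    ],
    "context_speakers": ["LEONARD", "SHELDON"],
    "sarcasm": true
  }
}
"""
data = json.loads(raw)

for key, inst in data.items():
    # Pair each context utterance with its speaker, oldest first.
    context = list(zip(inst["context_speakers"], inst["context"]))
    print(key, inst["speaker"], inst["sarcasm"], len(context))
```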

Citation

Please cite the following paper if you find this dataset useful in your research:

@inproceedings{mustard,
    title = "Towards Multimodal Sarcasm Detection (An \_Obviously\_ Perfect Paper)",
    author = "Castro, Santiago  and
      Hazarika, Devamanyu  and
      P{\'e}rez-Rosas, Ver{\'o}nica  and
      Zimmermann, Roger  and
      Mihalcea, Rada  and
      Poria, Soujanya",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = "7",
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
}

Run the code

  1. Set up the environment with Conda:

    conda env create
    conda activate mustard
    python -c "import nltk; nltk.download('punkt')"
  2. Download the Common Crawl pretrained GloVe word vectors (300d, 840B tokens) to a location of your choice.

  3. Download the pre-extracted visual features to the data/ folder (so data/features/ contains the folders context_final/ and utterances_final/ with the features) or extract the visual features yourself.

  4. Download the pre-extracted BERT features and place the two files directly under the folder data/ (so they are data/bert-output.jsonl and data/bert-output-context.jsonl), or extract the BERT features in another environment with Python 2 and TensorFlow 1.11.0 following "Using BERT to extract fixed feature vectors (like ELMo)" from BERT's repo and running:

    # Download BERT-base uncased in some dir:
    wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
    # Then put the location in this var:
    BERT_BASE_DIR=...
    
    python extract_features.py \
      --input_file=data/bert-input.txt \
      --output_file=data/bert-output.jsonl \
      --vocab_file=${BERT_BASE_DIR}/vocab.txt \
      --bert_config_file=${BERT_BASE_DIR}/bert_config.json \
      --init_checkpoint=${BERT_BASE_DIR}/bert_model.ckpt \
      --layers=-1,-2,-3,-4 \
      --max_seq_length=128 \
      --batch_size=8
  5. Check the options in python train_svm.py -h to select a run configuration (or modify config.py) and then run it:

    python train_svm.py  # add the flags you want
  6. Evaluation: we evaluate using the weighted F-score metric in a 5-fold cross-validation scheme. The fold indices are available at data/split_indices.p. Refer to our baseline scripts for more details.
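The evaluation protocol above can be sketched as follows. This is not the repository's train_svm.py; random features stand in for the real ones, and StratifiedKFold stands in for the shipped fold indices (a pickle file under data/), but the weighted F-score averaging over 5 folds is the same idea.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Dummy data standing in for the real extracted features and sarcasm labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
y = rng.integers(0, 2, size=100)

scores = []
for train_idx, test_idx in StratifiedKFold(
    n_splits=5, shuffle=True, random_state=0
).split(X, y):
    # Fit an SVM on the training folds, score the held-out fold.
    clf = SVC(kernel="rbf", gamma="scale").fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="weighted"))

print(f"mean weighted F-score over 5 folds: {np.mean(scores):.3f}")
```

For the actual experiments, substitute the pre-extracted features and load the train/test indices from the fold-index file rather than re-splitting.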

Contributors

bryant1410, devamanyu, soujanyaporia


mustard's Issues

About the validation set?

It seems that there is no validation set for model optimization.
Although config.py defines val_split = 0.1, I cannot find where it is used to form a validation set in the other files.
For the speaker-dependent setting, I understand the cross-validation scheme, but for the speaker-independent setting, how do you select the model?

Regarding Feature vector size.

Hi,
I was trying to play around with the dataset but was confused about the shape of the features.
I understand that the shape of the context feature vectors is
(#samples x #sentences x feature-vector-size),
but this holds only for text and visual; I am not sure about audio.
Can you please clarify?

Also, is there any way I could get the features word-wise?
Thank you.

Google Colab notebook request

Hi,

Please convert the code into a Google Colab notebook and drop a link in the README. I'd love to play with it, but it's not clear how large the dataset and model are, and I have a tiny MBP, so it would be great to run this on Colab.

Cannot reproduce the results

Hi
Thank you for your work and code
I tried to reproduce the results shown in the paper but noticed large performance degradation across all configs.

For example, I got

 weighted avg      0.574     0.584     0.573       356

for independent T+A

weighted avg      0.602     0.587     0.589       356

for independent T+V

Weighted Precision: 0.483  Weighted Recall: 0.472  Weighted F score: 0.472

for dependent T

Weighted Precision: 0.629 Weighted Recall: 0.626 Weighted F score: 0.626
for dependent T+V

Did I miss anything or could you suggest some training tricks?

Thanks

Extracting 512 feature vector

Hi, I am working on a project that uses your methods for feature extraction on facial features. I am wondering how to extract the 512 ResNet features using your model. When I extract features following the process described in the visual folder, I get 2048 values.

Kind regards

Visual Feature Extraction

Hi
Thank you for your work and code
Could I get the context_final data by following the Visual Feature Extraction steps?
It seems that I could only get the features/utterances_final hdf5 data.
Did I miss anything in the process?
Thanks.

Audio Extraction/Features file

Is it possible to see the audio extraction python script to fully analyze how it works in detail?

As well, how were you able to reduce the laugh track as per your paper "Then we remove background noise from the signal
by applying a heuristic vocal-extraction method."

the audio of raw data

Some videos in the dataset have no sound, which means audio is missing for part of the data. Why is that?
