
VideoQA

This is the implementation of our paper "Video Question Answering via Gradually Refined Attention over Appearance and Motion".

Datasets

For our experiments, we created two VideoQA datasets, MSVD-QA and MSRVTT-QA. Both are built on existing video description datasets: the QA pairs are generated from the descriptions using this tool, with additional processing steps. The corresponding videos can be found in the base datasets, MSVD and MSR-VTT. For MSVD-QA, youtube_mapping.txt may be needed to build the mapping of video names. The following are some examples from the datasets.
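The generation step can be illustrated with a toy rule-based transform. This is a simplified sketch only: the actual QA pairs come from the external QA-generation tool mentioned above plus extra processing, and the pattern and helper name here are illustrative assumptions.

```python
import re

def description_to_qa(description):
    # Toy rule for descriptions of the form "<subject> is <verb>ing <object>".
    # Purely an illustration of turning a description into a QA pair; the
    # real datasets use a dedicated QA-generation tool.
    m = re.match(r"(a .+?) is (\w+ing) (.+)", description)
    if m is None:
        return None
    subject, verb, obj = m.groups()
    question = f"what is {subject} {verb}?"
    answer = obj.split()[0]  # take the head word of the object as the answer
    return question, answer

print(description_to_qa("a man is playing guitar"))
# -> ('what is a man playing?', 'guitar')
```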

[Example QA pairs from MSVD-QA and MSRVTT-QA]

Models

We propose a model with gradually refined attention over appearance and motion in the video to tackle the VideoQA task. The architecture is presented below. We also compare the proposed model with three baseline models; details can be found in the paper.

[Model architecture diagram]
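To make the "gradually refined" idea concrete, here is a minimal, hypothetical sketch in plain Python: the question vector attends over appearance features, the attended context is folded back into the query, and the refined query then attends over motion features. The real model uses learned projections and a recurrent structure; this only illustrates the refinement loop.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(features, query):
    # Score each per-frame feature against the query (dot product),
    # normalize with softmax, and return the weighted-average feature.
    scores = [sum(f_d * q_d for f_d, q_d in zip(f, query)) for f in features]
    weights = softmax(scores)
    dim = len(features[0])
    context = [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
    return context, weights

def gradually_refined_attention(appearance, motion, question, steps=2):
    # Alternate attention over appearance and motion features, folding each
    # attended context back into the query so later passes are "refined" by
    # earlier ones. Simplified: no learned projections, plain addition.
    query = list(question)
    for _ in range(steps):
        app_ctx, _ = attend(appearance, query)
        query = [q + a for q, a in zip(query, app_ctx)]
        mot_ctx, _ = attend(motion, query)
        query = [q + m for q, m in zip(query, mot_ctx)]
    return query
```

Each pass sharpens the query with whichever stream it just attended over, which is the intuition behind refining appearance attention with motion attention and vice versa.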

Code

The code is written in pure Python, with TensorFlow as the deep learning library. Feature extraction uses two networks from the community: VGG16 and C3D.

Environments

  • Ubuntu 14.04
  • Python 3.6.0
  • TensorFlow 1.3.0

Prerequisites

  1. Clone the repository to your local machine.

    $ git clone https://github.com/xudejing/VideoQA.git
    
  2. Download the VGG16 and C3D checkpoints provided in the corresponding repositories and put them in the util directory. Then download the word embeddings trained over 6B tokens (glove.6B.zip) from GloVe and put the 300d file in the util directory.

  3. Install the python dependency packages.

    $ pip install -r requirements.txt
    

Usage

The directory model contains the definitions of the four models, and config.py is where the parameters of the models and the training process are defined.
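As a hypothetical illustration of what such a configuration might contain (every name and value below is an assumption, not the repository's actual config.py):

```python
# Hypothetical sketch of model/training parameters, loosely in the spirit
# of config.py; the real file defines its own names and values.
CONFIG = {
    'word_dim': 300,         # GloVe 300d embeddings placed in util/
    'appearance_dim': 4096,  # VGG16 feature size (assumption)
    'motion_dim': 4096,      # C3D feature size (assumption)
    'hidden_dim': 256,       # assumption
    'answer_num': 1000,      # answer vocabulary size (assumption)
    'learning_rate': 1e-3,
    'batch_size': 64,
    'epoch_num': 30,
}
```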

  1. Preprocess the VideoQA datasets, for example:

    $ python preprocess_msvdqa.py {dataset location}
    
  2. Train, validate and test the models, for example:

    $ python run_gra.py --mode train --gpu 0 --log log/evqa --dataset msvd_qa --config 0
    

    (Note: you can pass -h to get help.)

  3. Visualize the training process using tensorboard, for example:

    $ tensorboard --logdir log --port 8888
    
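The command-line flags shown above could be parsed with a sketch like the following. run_gra.py's actual argument handling may differ (pass -h to see it); this only mirrors the flags from the example command, and the mode choices are an assumption.

```python
import argparse

def build_parser():
    # Mirrors the flags used in the example command above; the real
    # run_gra.py may define different or additional options.
    parser = argparse.ArgumentParser(description='Train/val/test a VideoQA model.')
    parser.add_argument('--mode', choices=['train', 'val', 'test'], required=True)
    parser.add_argument('--gpu', type=int, default=0)
    parser.add_argument('--log', default='log/evqa')
    parser.add_argument('--dataset', default='msvd_qa')
    parser.add_argument('--config', type=int, default=0)
    return parser

args = build_parser().parse_args(
    ['--mode', 'train', '--gpu', '0', '--log', 'log/evqa',
     '--dataset', 'msvd_qa', '--config', '0'])
```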

Citation

If you find this code useful, please cite the following paper:

@inproceedings{xu2017video,
  title={Video Question Answering via Gradually Refined Attention over Appearance and Motion},
  author={Xu, Dejing and Zhao, Zhou and Xiao, Jun and Wu, Fei and Zhang, Hanwang and He, Xiangnan and Zhuang, Yueting},
  booktitle={ACM Multimedia},
  year={2017}
}
