gcnet's Introduction

GCNet (GIF Caption Network) | Neural Network Generated GIF Captions

The goal of GCNet is to produce high quality GIF captions.

Below are GIFs from the TGIF dataset, and GCNet's generated captions for them. GCNet was not trained with these GIFs!

Each row below paired a GIF from the TGIF dataset (images omitted here) with GCNet's generated caption:

  • a monkey with an animal is eating something
  • a man is holding a microphone and moving his hands
  • a white car is driving down a road
  • a soccer player is scoring a goal in a football match
  • a dog is trying to catch a toy
  • a girl with blonde hair is talking and moving her head
  • a young woman in a car is driving and smiles

Architecture

Input

  1. GIF frames' precomputed VGG16 output (TODO: Create Standalone GCNet that doesn't require precomputation)
  2. In-progress GIF caption. During training this is a subcaption of the full caption; outside of training it is the caption being built up word by word. See Setup Step 7 - Data Expansion and Obtaining a Caption with GCNet for more details.

Output

  1. Next word of the in-progress GIF caption. See Obtaining a Caption with GCNet for more details.

Overview

GCNet can be thought of as computing: P(next word in caption | GIF, in-progress caption)

(Figure: GCNet architecture overview)

Obtaining a Caption with GCNet

GCNet generates a GIF's caption iteratively: the GIF and its in-progress caption are run through GCNet once per word in the caption, since each pass computes only the next word given the input GIF and the in-progress caption. The first iteration's in-progress caption consists of empty word indices (all zeros) and produces the first word of the caption. That word becomes part of the in-progress caption and is fed back into GCNet along with the same GIF, producing the second word, and so on, until the in-progress caption reaches the caption's max length - 1 and the last word is produced. The in-progress caption is then the final generated caption for the given GIF.
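A minimal sketch of this decoding loop, assuming a Keras-style model that takes the precomputed VGG16 frame features and the padded word-index caption as its two inputs (the names here are illustrative, not the actual gcnet.test.py API):

    import numpy as np

    def generate_caption(model, gif_features, caption_length, index_to_word):
        # The first in-progress caption is empty: all-zero word indices.
        caption_indices = np.zeros((1, caption_length), dtype=np.int32)
        words = []
        for i in range(caption_length):
            # Each pass computes P(next word | GIF, in-progress caption).
            probs = model.predict([gif_features, caption_indices])[0]
            next_index = int(np.argmax(probs))
            if next_index == 0:  # assumption: index 0 is the empty/padding word
                break
            caption_indices[0, i] = next_index  # feed the word back in
            words.append(index_to_word[next_index])
        return ' '.join(words)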

  1. For now, all input needs to be precomputed. Steps to do this are in Setup.
  2. For now, once the precomputed inputs are produced, use gcnet.test.py, changing the precomputed file references to your own.

(TODO: Standalone GCNet)

Pretrained components

  1. ImageNet Trained VGG16
  2. Stanford's GloVe Word Vectors (840B tokens, 2.2M vocab, 300D)

Setup

From start to finish, this will take at least 6 hours, assuming a gigabit internet connection, a fast processor, a lot of memory (at least 64GB), and a good GPU :)

0. Requirements

1. Provide Dataset

  1. mkdir data
  2. Place a dataset meeting the Data Format requirements at ./data/gif-url-captions.tsv

Or, if you don't have your own dataset, you may use the TGIF dataset (also on Kaggle); please review its license! Follow the steps below to proceed with this option.

  1. mkdir data
  2. wget https://raw.githubusercontent.com/raingo/TGIF-Release/master/data/tgif-v1.0.tsv -O ./data/gif-url-captions.tsv

2. Download

This will download all GIFs in the above dataset. If using TGIF as your dataset, this is ~120GB. Make sure you have enough room!

  1. mkdir gifs
  2. node download.js
  • If you encounter errors while running this, increase DOWNLOAD_INTERVAL in download.js
  • download.js will also strip out captions into ./captions.txt for further processing

3. Prepare GIFs

This will split, resize, and save the resulting GIF frames such that for every GIF ./gifs/X.gif with N frames, it creates the frame PNGs ./gifs/X/X_0.png through ./gifs/X/X_{N-1}.png.

This doubles the size of the data to ~250GB. If you would like to remove each GIF once it has been processed (keeping the total size at ~120GB), set removeProcessedGifs = True in prepareGifs.py

  1. python -i prepareGifs.py
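prepareGifs.py is the authoritative implementation; below is a rough sketch of the split-and-resize step using Pillow (the 224x224 size is an assumption matching VGG16's input, and the path layout follows the description above):

    import os
    from PIL import Image, ImageSequence

    def split_gif(gif_path, out_dir='./gifs', size=(224, 224)):
        # For ./gifs/X.gif, write frames to ./gifs/X/X_<i>.png
        name = os.path.splitext(os.path.basename(gif_path))[0]
        frame_dir = os.path.join(out_dir, name)
        os.makedirs(frame_dir, exist_ok=True)
        with Image.open(gif_path) as gif:
            for i, frame in enumerate(ImageSequence.Iterator(gif)):
                frame = frame.convert('RGB').resize(size)
                frame.save(os.path.join(frame_dir, '%s_%d.png' % (name, i)))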

4. Clean Captions

This will attempt to normalize the captions by removing unneeded punctuation and expressions, saving them to ./clean.captions.txt

  1. python -i cleanCaptions.py
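cleanCaptions.py defines the actual rules; this is just a hedged sketch of that kind of normalization (the specific regexes are assumptions):

    import re

    def clean_caption(caption):
        caption = caption.lower().strip()
        caption = re.sub(r'\([^)]*\)', '', caption)     # drop parenthetical expressions
        caption = re.sub(r"[^a-z0-9' ]", ' ', caption)  # remove unneeded punctuation
        return re.sub(r'\s+', ' ', caption).strip()     # collapse whitespace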

5. Filter Captions

filterCaptions.py will:

  • compute the vocab (saved to vocab.#vocabSize.txt)
  • compute the embedding matrix (saved to embeddingMatrix.#vocabSize.npy)
  • filter out low-quality captions (by default, captions with fewer than 90% of their words in the vocab)
  • compute vocab-indexed captions (saved to dataY.captions.#captionLength.npy)

  1. wget http://nlp.stanford.edu/data/glove.840B.300d.zip -O ./data/glove.840B.300d.zip
  2. unzip ./data/glove.840B.300d.zip -d ./data/glove
  3. python -i filterCaptions.py
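A rough sketch of what this step computes (the min_count cutoff and helper names are assumptions; the 90% threshold is the documented default, and filterCaptions.py is authoritative):

    import numpy as np

    def build_vocab(captions, min_count=5):
        # Count word frequencies; index 0 is reserved for the empty/padding word.
        counts = {}
        for caption in captions:
            for word in caption.split():
                counts[word] = counts.get(word, 0) + 1
        words = sorted(w for w, c in counts.items() if c >= min_count)
        return {w: i + 1 for i, w in enumerate(words)}

    def keep_caption(caption, word_to_index, threshold=0.9):
        # Keep a caption only if >= 90% of its words are in the vocab.
        words = caption.split()
        in_vocab = sum(1 for w in words if w in word_to_index)
        return len(words) > 0 and in_vocab >= threshold * len(words)

    def index_caption(caption, word_to_index, caption_length):
        # Vocab-indexed, zero-padded caption vector.
        indices = [word_to_index.get(w, 0) for w in caption.split()]
        return np.array((indices + [0] * caption_length)[:caption_length])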

(Figure: computing the vocab and embedding matrix)

(Figure: computing GIF caption vectors)

6. Precompute GIF frames' VGG16 output

Depending on your GPU, this step can take a while; on a GTX 1080 it takes about 3 hours with default settings (~1.65M images). The precomputed VGG16 frame features are saved to precomputedVGG16Frames.#gifFrames.npy (~6GB).

  1. python -i precomputeVGG16.py
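A minimal sketch of this precomputation with Keras (that precomputeVGG16.py takes features from the 4096-d fc2 layer is an assumption):

    import numpy as np
    from keras.applications.vgg16 import VGG16, preprocess_input
    from keras.models import Model
    from keras.preprocessing import image

    # ImageNet-trained VGG16, cut at the fc2 layer so each frame
    # becomes a 4096-d feature vector.
    vgg = VGG16(weights='imagenet', include_top=True)
    feature_model = Model(inputs=vgg.input, outputs=vgg.get_layer('fc2').output)

    def frame_features(png_path):
        img = image.load_img(png_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return feature_model.predict(x)[0]  # shape (4096,)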

(Figure: precomputing the GIF frames' VGG16 output)

7. Running GCNet

If you changed any of the parameters in step 5 or 6, you will need to change the corresponding variables in gcnet.train.py.

This will load all precomputed data, build GCNet, expand the data (see figure below), and start training.

  1. python -i gcnet.train.py

Data Expansion

(Figure: GCNet caption data expansion)

There is an error in the above figure: it should also include the empty subcaption (so that i runs from 0 to #captionLength - 1).

Prepend this pair to the example lists:

X: [0, 0, ..., 0]

Y: [1]
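A small sketch of this expansion for one vocab-indexed caption, including the empty subcaption (function names are illustrative):

    import numpy as np

    def expand_caption(indices, caption_length):
        # One training pair per word: the first i words predict word i.
        # The first pair uses the empty (all-zero) subcaption.
        xs, ys = [], []
        for i in range(len(indices)):
            sub = np.zeros(caption_length, dtype=np.int32)
            sub[:i] = indices[:i]
            xs.append(sub)
            ys.append(indices[i])
        return xs, ys

    # e.g. a caption indexed as [1, 7, 4] with caption_length 4 expands to:
    #   X: [0, 0, 0, 0]  Y: 1
    #   X: [1, 0, 0, 0]  Y: 7
    #   X: [1, 7, 0, 0]  Y: 4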

8. Test Trained GCNet

If you changed any of the parameters in step 5 or 6, you will need to change the corresponding variables in gcnet.test.py.

  1. Set PRETRAINED_WEIGHTS in gcnet.test.py to the location of your trained weights file
  2. python -i gcnet.test.py
  3. Enjoy!

Data Format

Provide a list of GIF URLs and corresponding captions in the following format. Each line contains:

gif-url<TAB>gif-caption

(gif-url and gif-caption are separated by a single tab character.)

For example:

http://doggif.gif	a dog playing catch
http://catgif.gif	a cat walking around
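A minimal sketch for reading this format (illustrative; download.js does the actual parsing):

    def load_dataset(tsv_path):
        # Each line: gif-url <TAB> gif-caption
        pairs = []
        with open(tsv_path, encoding='utf-8') as f:
            for line in f:
                url, caption = line.rstrip('\n').split('\t', 1)
                pairs.append((url, caption))
        return pairs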

Setup Step 1 above includes instructions for obtaining a dataset that meets this format. (GCNet does not require that particular dataset, as long as the aforementioned data format is followed.)

Acknowledged Issues

  1. Some GIFs, when split into their frames, produce artifacts, distortions, or otherwise non-ideal frames. This is a bug in Pillow. I'm open to suggestions; I looked at many options, and they all seemed to have their pros and cons. I stuck with Pillow because it seemed no worse than the other options and is easy to use with Python.
  2. Using the default 840B GloVe pretrained word vectors takes around 40GB of memory. To circumvent this, you may consider using one of GloVe's smaller pretrained word vector sets. Otherwise, it is possible to change how the word vectors are loaded to be more memory efficient (see the sketch after this list).
  3. GCNet parameters are spread across files. These should be consolidated into a single config.
  4. Sometimes the captions are just flat out wrong. This is typically because of #1. In the table above, those GIFs' frames were split cleanly, and as you can tell, their resulting captions are just incredible. Otherwise, see Future Work for details on how I plan to improve GCNet even more.
  5. It's difficult to run a single GIF through GCNet to see what caption it produces. See Standalone GCNet.
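For issue 2, a sketch of the more memory-efficient loading: stream the GloVe text file and keep only the vectors for words actually in the vocab, instead of materializing all 2.2M of them (illustrative; not how filterCaptions.py currently loads them):

    import numpy as np

    def load_glove_for_vocab(glove_path, word_to_index, dim=300):
        # Row i holds the vector for the word with vocab index i;
        # row 0 stays zero for the empty/padding word.
        matrix = np.zeros((len(word_to_index) + 1, dim), dtype=np.float32)
        with open(glove_path, encoding='utf-8') as f:
            for line in f:
                word, _, rest = line.partition(' ')
                i = word_to_index.get(word)
                values = rest.split()
                if i is not None and len(values) == dim:
                    matrix[i] = np.array(values, dtype=np.float32)
        return matrix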

Standalone GCNet

TODO: Create standalone gcnet.py that takes a GIF file name as input from the command line and prints out GCNet's generated caption for the GIF. It will also download pretrained GCNet weights. This should allow people to skip all Setup steps.

Future Work

  1. The most obvious improvement I can think of would be passing through the convolutional output from VGG16 (before its fully connected layers) for each frame, in addition to the currently used VGG16 output. This is because VGG16 was trained only on static categories and may not have specialized in features that are useful for understanding contextual actions.
  2. Curate a larger, more diverse dataset. TGIF was great for a POC, but it is incredibly small (only ~100k GIFs). I'm looking forward to making a website that will allow people to submit GIFs with captions and let the community vote on the quality of others' submissions.
  3. Make a website that allows users to upload / link a GIF, returns GCNet's generated caption, and then lets them vote on the generated caption's accuracy. This would not only be a great way for people to sample GCNet, it would also collect amazing data that could be used for reinforcement learning.

WIP

This is a work in progress. If you notice something is wrong, please let me know, and I'll fix it when I get the chance! Thanks!

gcnet's Issues

I met this error when downloading gif files ...

Hi, @chcaru

I met this error when downloading gif files ...
...
cdox6o1_400.gif ./gifs/9218.gif
https://31.media.tumblr.com/80d18953b8137768c0ba20f49f717923/tumblr_nq65yc1FXJ1uy2hbko1_400.gif ./gifs/9219.gif
https://38.media.tumblr.com/7647292b1aaa43462f9003b5aa18299e/tumblr_nocba7iTH61u3umtco1_500.gif ./gifs/9220.gif
https://33.media.tumblr.com/e6446e77491635574153cae3c631a2be/tumblr_npvv6zt4We1t9pukjo1_250.gif ./gifs/9221.gif
https://33.media.tumblr.com/38bbbd0cb564207d6804c1007443499a/tumblr_noo3eqSx1B1rfk87qo1_400.gif ./gifs/9222.gif
events.js:182
throw er; // Unhandled 'error' event
^

Error: connect ETIMEDOUT 124.108.101.58:443
at Object._errnoException (util.js:1026:11)
at _exceptionWithHostPort (util.js:1049:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)

I already tried increasing the 25 value to 99 in the download.js file, but the above error still happens.

What am I doing wrong?

Thanks at any rate.

I met this error when running "node download.js"

Hi, @chcaru

I met this error when running "node download.js":

sgio2@sgio2:~/gcnet$ node download.js
/home/sgio2/gcnet/download.js:36
let index = 0;
^^^

SyntaxError: Block-scoped declarations (let, const, function, class) not yet supported outside strict mode
at exports.runInThisContext (vm.js:53:16)
at Module._compile (module.js:374:25)
at Object.Module._extensions..js (module.js:417:10)
at Module.load (module.js:344:32)
at Function.Module._load (module.js:301:12)
at Function.Module.runMain (module.js:442:10)
at startup (node.js:136:18)
at node.js:966:3

What am I doing wrong?

I've installed Node.js on Ubuntu.

Thanks in advance.

Ubuntu 16.04 x64
node v4.2.6

ImportError: cannot import name 'Set'

Hi, @chcaru

I met this runtime error when running:

kraken:gcnet dti$ python3 -i filterCaptions.py
Traceback (most recent call last):
File "filterCaptions.py", line 4, in
from sets import Set
ImportError: cannot import name 'Set'

What am I doing wrong?
Thanks in advance ~

My environment:
iMac i7, macOS Sierra, Python 3
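For reference, the sets module was removed in Python 3, so this import cannot work there; the built-in set replaces it. A minimal compatibility shim for the import (filterCaptions.py may need other Python 3 fixes as well):

    # The 'sets' module only exists in Python 2.
    try:
        from sets import Set  # Python 2
    except ImportError:
        Set = set  # Python 3: the built-in set replaces sets.Set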
