
contextualLSTM

Contextual LSTM for NLP tasks like word prediction

This repo's goal is to implement the Contextual LSTM model for word prediction as described by [Ghosh, S., Vinyals, O., Strope, B., Roy, S., Dean, T., & Heck, L. (2016). Contextual LSTM (CLSTM) models for Large scale NLP tasks. arXiv:1602.06291]

Note: there are scripts to run the pipelines; however, the project needs a bit of cleanup. If anyone is interested in using it, please write to me or open an issue and I'll fix or help with any error you run into.

Data preprocessing and embeddings

Further details about the Wikipedia data preprocessing can be found at

./documentation/word_embeddings_and_topic_detection.pdf
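
For orientation, the snippet below sketches the kind of cleaning the preprocessing step performs. The real logic lives in src/preprocess/cleaner.py (clean_data), which uses pattern.en; the filtering rules shown here are assumptions, not the repo's exact behavior.

from pattern.en import tokenize  # requires the `pattern` package

def clean_text(raw_text):
    # tokenize() splits raw text into sentences, with tokens
    # separated by single spaces.
    sentences = tokenize(raw_text)
    # Lowercase and keep only alphabetic tokens (an assumed
    # approximation of the repo's actual filtering rules).
    return [[w.lower() for w in s.split() if w.isalpha()]
            for s in sentences]

print(clean_text("The quick brown fox jumps over the lazy dog."))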

Context creation with topic detection

Further details on the different gensim topic detection methods, as well as the embedding arithmetic used for context creation, can be found at

./documentation/word_embeddings_and_topic_detection_II.pdf
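
As a rough illustration of what that document covers, here is a minimal gensim LDA run plus one simple piece of embedding arithmetic (averaging word vectors into a context vector). All names and parameters below are illustrative, not the repo's actual API.

import numpy as np
from gensim import corpora, models

docs = [["human", "machine", "interface"],
        ["graph", "trees", "minors"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Train a tiny LDA model; num_topics is an arbitrary choice here.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
print(lda.get_document_topics(corpus[0]))

# One simple context construction: average the embeddings of a
# document's words (the embeddings dict is assumed to exist).
def context_vector(words, embeddings, size):
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(size)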

Execution

Download a Wikipedia dump, for example:

https://dumps.wikimedia.org/enwiki/20180420/enwiki-20180420-pages-articles.xml.bz2

After that, use wiki_extractor to process it:

./wiki_extractor_launch.sh path_to_wikipedia_dump

where path_to_wikipedia_dump is the file you downloaded (e.g. enwiki-20180420-pages-articles.xml.bz2).
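
For reference, WikiExtractor-style output wraps each article in <doc ...> ... </doc> tags. The hypothetical reader below (the output path and file layout are assumptions about what the wrapper script produces) shows one way to iterate over the extracted articles:

def iter_articles(path):
    # Yield the plain text of each <doc ...> ... </doc> block.
    article = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("<doc "):
                article = []
            elif line.startswith("</doc>"):
                yield "".join(article)
            else:
                article.append(line)

# The AA/wiki_00 path follows WikiExtractor's default naming; it is
# assumed here, not confirmed by the repo.
for text in iter_articles("../data/enwiki/AA/wiki_00"):
    print(text[:80])
    break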

To run the whole pipeline use the script:

./run_pipeline.sh ../data/enwiki 500

You can also run just the preprocessing step:

./preprocess.sh ../data/enwiki 500 2

where:

  • ../data/enwiki is the default path where the preprocessing script left the extracted and cleaned Wikipedia dump.
  • 500 is the desired embedding size.

To run just the pipeline with pre-trained embeddings of size 1000, run:

./run_short_pipeline.sh ../data/ 1000

You can download the required trained embeddings from here:

https://www.dropbox.com/s/ws6d8l6h6jp3ldc/embeddings.tar.gz?dl=0

You should place them inside the models/ folder.
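
If you want to inspect the embeddings directly, the sketch below assumes the .pklz files are gzip-compressed pickles (the pipeline reads them through VectorManager.read_vector in src/utils/vector_manager.py). The filename and the (id, word, vector) layout are assumptions.

import gzip
import pickle

# idWordVec_1000.pklz mirrors the idWordVec_500.pklz naming used by
# the pipeline; adjust to whatever the archive actually contains.
with gzip.open("../models/idWordVec_1000.pklz", "rb") as f:
    id_word_vec = pickle.load(f)

# Assumed layout: a sequence of (id, word, vector) entries.
print(len(id_word_vec))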

LSTM

Basic LSTM implementation with TF at ./src/lstm.py
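
As a rough sketch of what such an implementation looks like (sizes and names below are illustrative, not taken from lstm.py):

import tensorflow as tf  # TensorFlow 1.x style, matching the repo

batch_size, num_steps, embedding_size, hidden_size = 32, 20, 500, 256

# Pre-computed word embeddings are fed in for each timestep.
inputs = tf.placeholder(tf.float32,
                        [batch_size, num_steps, embedding_size])
cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# `outputs` has shape [batch_size, num_steps, hidden_size] and would
# be projected to vocabulary logits for word prediction.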

CLSTM

Contextual LSTM implementation with TF at ./src/clstm.py

Although functional, this version is still too slow to be practical for training. If you want to collaborate or have any questions about it, feel free to contact me; I plan to finish it shortly and upload a detailed description.
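
The core difference from the plain LSTM is easy to sketch: following the CLSTM paper, a per-document context (topic) vector is concatenated to the word embedding at every timestep before entering the cell. The snippet below is a minimal illustration of that idea, not the actual clstm.py code:

import tensorflow as tf

batch_size, num_steps = 32, 20
embedding_size, topic_size, hidden_size = 500, 100, 256

words = tf.placeholder(tf.float32,
                       [batch_size, num_steps, embedding_size])
topic = tf.placeholder(tf.float32, [batch_size, topic_size])

# Tile the topic vector across time and append it to each input.
topic_seq = tf.tile(tf.expand_dims(topic, 1), [1, num_steps, 1])
clstm_inputs = tf.concat([words, topic_seq], axis=2)

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
outputs, state = tf.nn.dynamic_rnn(cell, clstm_inputs,
                                   dtype=tf.float32)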

Execution

Most files have their own execution script under the bin/ folder. All scripts named submit_XXX.sh are designed to run on a supercomputer with the Slurm queue system. To run them locally, just issue the python commands followed by the correct paths.

Note: due to the use of many different packages, not all files run with the same Python version (some require 2.7, others 3.5.2, and the rest 3.6). I expect to unify them (or clearly state the required version) soon.


Known issues

run_pipeline with threshold higher than 1 is not supported

run_pipeline.sh only works if the trained embeddings cover all the words present in the dataset (i.e. there is no minimum number of occurrences per word required to create its embedding).
If the threshold is higher than 1, it crashes because the script that substitutes removed words with tags has not been called.
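
For reference, the missing step amounts to something like the sketch below, which replaces words under the frequency threshold with a placeholder tag so that every remaining token has an embedding (the <unk> tag name is an assumption):

from collections import Counter

def substitute_rare_words(sentences, threshold, tag="<unk>"):
    # Count occurrences across all sentences, then swap any word
    # below the threshold for the placeholder tag.
    counts = Counter(w for s in sentences for w in s)
    return [[w if counts[w] >= threshold else tag for w in s]
            for s in sentences]

print(substitute_rare_words([["a", "a", "b"]], threshold=2))
# -> [['a', 'a', '<unk>']]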

Cleanup repo

The preprocessing for the normal dataset and for the one with context information is a bit different.
The two are intertwined, and some cleanup would be nice to remove duplicate code and unify the pipelines.

Error in running ./run_pipeline.sh ./data/enwiki 500

Traceback (most recent call last):
  File "preprocess.py", line 4, in <module>
    from preprocess.cleaner import clean_data
  File "../src/preprocess/cleaner.py", line 2, in <module>
    from pattern.en import tokenize
ImportError: No module named pattern.en

Embeddings path: ../models/idWordVec_500.pklz

Traceback (most recent call last):
  File "../src/lstm/lstm.py", line 423, in <module>
    tf.app.run()
  File "/home/ankit/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "../src/lstm/lstm.py", line 360, in main
    embeddings = VectorManager.read_vector(FLAGS.embeddings)
  File "../src/utils/vector_manager.py", line 74, in read_vector
    with open(filename, 'rb') as f:
IOError: [Errno 2] No such file or directory: '../models/idWordVec_500.pklz'
