
LexVec

This is an implementation of the LexVec word embedding model (similar to word2vec and GloVe) that achieves state-of-the-art results in multiple NLP tasks, as described in the two papers listed under References below.

Pre-trained Vectors

Evaluation

In-memory, large corpus

Model                   GSem   GSyn   MSR    RW    SimLex  SCWS  WS-Sim  WS-Rel  MEN   MTurk
LexVec, Word            81.1%  68.7%  63.7%  .489  .384    .652  .727    .619    .759  .655
LexVec, Word + Context  79.3%  62.6%  56.4%  .476  .362    .629  .734    .663    .772  .649
word2vec Skip-gram      78.5%  66.1%  56.0%  .471  .347    .649  .774    .647    .759  .687

(GSem, GSyn, and MSR columns report analogy accuracy; the remaining columns report Spearman rank correlation on word-similarity datasets.)
  • All three models were trained using the same English Wikipedia 2015 + NewsCrawl corpus.

  • GSem, GSyn, and MSR analogies were solved using 3CosMul (see the sketch after this list).

  • LexVec was trained using the default parameters, written out here in full for comparison:

    $ ./lexvec -corpus enwiki+newscrawl.txt -output lexvecvectors -dim 300 -window 2 \
    -subsample 1e-5 -negative 5 -iterations 5 -minfreq 100 -matrix ppmi -model 0
    
  • word2vec Skip-gram was trained using:

    $ ./word2vec -train enwiki+newscrawl.txt -output sgnsvectors -size 300 -window 10 \
    -sample 1e-5 -negative 5 -hs 0 -binary 0 -cbow 0 -iter 5 -min-count 100
    
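For reference, 3CosMul (Levy and Goldberg, 2014) picks the answer b* to the analogy a : a* :: b : ? that maximizes cos(b*, b) · cos(b*, a*) / (cos(b*, a) + ε). Below is a minimal Python/numpy sketch; the words and matrix names are illustrative and not part of the LexVec codebase.

    # Minimal 3CosMul analogy solver (Levy & Goldberg, 2014).
    # Assumes `words` (vocabulary list) and `matrix` (row-aligned,
    # L2-normalized word vectors) have already been loaded.
    import numpy as np

    def three_cos_mul(words, matrix, a, a_star, b, eps=0.001):
        """Solve the analogy a : a_star :: b : ? (e.g. man : king :: woman : ?)."""
        idx = {w: i for i, w in enumerate(words)}

        def sim(w):
            # Cosine similarity of every word with w, shifted from [-1, 1]
            # to [0, 1] as in the original formulation.
            return (matrix @ matrix[idx[w]] + 1) / 2

        # Score each candidate: cos(b*, b) * cos(b*, a_star) / (cos(b*, a) + eps).
        scores = sim(b) * sim(a_star) / (sim(a) + eps)
        for w in (a, a_star, b):          # exclude the query words themselves
            scores[idx[w]] = -np.inf
        return words[int(np.argmax(scores))]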

External memory, huge corpus

Model                   GSem   GSyn   MSR    RW    SimLex  SCWS  WS-Sim  WS-Rel  MEN   MTurk
LexVec, Word            76.4%  71.3%  70.6%  .508  .444    .667  .762    .668    .802  .716
LexVec, Word + Context  80.4%  66.6%  65.1%  .496  .419    .644  .775    .702    .813  .712
word2vec                73.3%  75.1%  75.1%  .515  .436    .655  .741    .610    .699  .680
GloVe                   81.8%  72.4%  74.3%  .384  .374    .540  .698    .571    .743  .645
  • All models use vectors with 300 dimensions.

  • GSem, GSyn, and MSR analogies were solved using 3CosMul.

  • LexVec was trained on this release of Common Crawl (58B tokens), restricting the vocabulary to the 2 million most frequent words, using the following command:

    $ OUTPUTDIR=output ./external_memory_lexvec.sh -corpus common_crawl.txt -negative 3 \
    -model 0 -maxvocab 2000000 -minfreq 0 -window 2                                             
    
  • The pre-trained word2vec vectors were trained using the unreleased Google News corpus containing 100B tokens, restricting the vocabulary to the 3 million most frequent words.

  • The pre-trained GloVe vectors were trained using Common Crawl (release unknown) containing 42B tokens, restricting the vocabulary to the 1.9 million most frequent words.

Installation

Binary

The easiest way to get started with LexVec is to download the binary release. We only distribute amd64 binaries for Linux.

Download binary

If you are using Windows, OS X, 32-bit Linux, or any other OS, follow the instructions below to build from source.

Building from source

  1. Install the Go compiler

  2. Make sure your $GOPATH is set

  3. Execute the following commands in your terminal:

    $ go get github.com/alexandres/lexvec
    $ cd $GOPATH/src/github.com/alexandres/lexvec
    $ go build

Usage

In-memory (default, faster, more accurate)

To get started, run $ ./demo.sh, which trains a model on the small text8 corpus (100MB of Wikipedia text).

Basic usage of LexVec is:

$ ./lexvec -corpus somecorpus -output someoutputdirectory/vectors

Run $ ./lexvec -h for a full list of options.

Additionally, we provide a word2vec script which implements the exact same command-line interface as the word2vec package, should you want to test LexVec using existing scripts.
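If you want to inspect the trained vectors from Python, they should load with any tool that reads the word2vec text format, such as gensim. The following is only a sketch: it assumes gensim is installed and that training wrote text-format vectors to someoutputdirectory/vectors.

    # Sanity-checking LexVec output with gensim (assumption: the output
    # file is in word2vec text format at someoutputdirectory/vectors).
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "someoutputdirectory/vectors", binary=False)

    # Nearest neighbors by cosine similarity.
    print(vectors.most_similar("king", topn=5))

    # An analogy solved with 3CosMul, the measure used in the evaluations above.
    print(vectors.most_similar_cosmul(
        positive=["king", "woman"], negative=["man"], topn=1))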

External Memory

By default, LexVec stores the sparse PPMI matrix being factorized in memory (PPMI is sketched at the end of this section). This can be a problem if your training corpus is large and your system's memory is limited. We suggest you first try the in-memory implementation, which achieves higher scores in evaluations. If you run into out-of-memory issues, try this external-memory approximation (not as accurate as in-memory; see the external-memory paper under References for details).

$ env OUTPUTDIR=output ./external_memory_lexvec.sh -corpus somecorpus -dim 300 ...exactsameoptionsasinmemory

Pre-processing can be accelerated by installing nsort and pypy and editing pairs_to_counts.sh.
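For reference, the matrix LexVec factorizes (the default -matrix ppmi in the commands above) is the positive pointwise mutual information matrix, PPMI(w, c) = max(0, log(P(w, c) / (P(w) P(c)))). Below is a minimal dense, unsmoothed numpy sketch; the actual implementation works with sparse counts plus the weighting and subsampling described in the papers.

    import numpy as np

    def ppmi(counts):
        """Positive PMI from a (word x context) co-occurrence count matrix.
        Dense and unsmoothed, for illustration only."""
        total = counts.sum()
        p_w = counts.sum(axis=1, keepdims=True) / total   # P(w), column vector
        p_c = counts.sum(axis=0, keepdims=True) / total   # P(c), row vector
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log((counts / total) / (p_w * p_c))
        pmi[~np.isfinite(pmi)] = 0.0    # log(0) and 0/0 cells become 0
        return np.maximum(pmi, 0.0)     # clip negative PMI to 0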

References

Salle, A., Idiart, M., & Villavicencio, A. (2016). Matrix factorization using window sampling and negative sampling for improved word representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).

Salle, A., Idiart, M., & Villavicencio, A. (2016). Enhancing the LexVec distributed word representation model using positional contexts and external memory. arXiv preprint arXiv:1606.01283.

License

Copyright (c) 2016 Alexandre Salle. All work in this package is distributed under the MIT License.
