
NLCodec


📕 Docs: https://isi-nlp.github.io/nlcodec

A set of (low-level) Natural Language Encoder-Decoders (codecs) that are useful in the preprocessing stage of an NLP pipeline. These codecs encode sequences into one of the following:

  1. Character
  2. Word
  3. BPE based subword
  4. Class

It provides a Python API (so you can embed it in your app) and a CLI (so you can use it as a standalone tool).

There are many BPE implementations available already; this one differs in the following ways:

  1. A pure Python implementation that is easy to modify for trying new ideas (other implementations require C++/Rust expertise to modify the core).
  2. An easily shareable and inspectable model file. It is plain text that can be inspected with less or cut, and it includes info on which pieces were put together, at what frequencies, etc.
  3. Reasonably faster than other pure Python implementations. Under the hood it uses tries, doubly linked lists, max-heaps, hash maps, and similar data structures to boost performance.
  4. A PySpark backend for extracting term frequencies from large datasets.

Installation

Please run only one of these:

# Install from pypi (preferred)
$ pip install nlcodec --ignore-installed

# Or, clone the repo for development mode
$ git clone https://github.com/isi-nlp/nlcodec
$ cd nlcodec
$ pip install --editable .

The pip installer registers these CLI tools in your PATH:

  • nlcodec -- CLI for learn, encode, and decode. Same as python -m nlcodec
  • nlcodec-learn -- CLI for learning BPE with the PySpark backend. Same as python -m nlcodec.learn
  • nlcodec-db -- CLI for bitextdb. Same as python -m nlcodec.bitextdb
  • nlcodec-freq -- CLI for extracting word and char frequencies using the Spark backend

Docs are available at https://isi-nlp.github.io/nlcodec

Citation

Refer to https://arxiv.org/abs/2104.00290 (to appear in ACL 2021 Demos):

@article{DBLP:journals/corr/abs-2104-00290,
  author    = {Thamme Gowda and
               Zhao Zhang and
               Chris A. Mattmann and
               Jonathan May},
  title     = {Many-to-English Machine Translation Tools, Data, and Pretrained Models},
  journal   = {CoRR},
  volume    = {abs/2104.00290},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.00290},
  archivePrefix = {arXiv},
  eprint    = {2104.00290},
  timestamp = {Mon, 12 Apr 2021 16:14:56 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2104-00290.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Authors

Contributors: thammegowda, ljferrer

Issues

Fit or shrink an existing vocab to a given dataset

  1. Accept a list of files
  2. Compute term frequencies
  3. Eliminate types with zero counts
  4. Preserve reserved types even if they have zero counts
  5. Save the resulting model at a given file path
  6. Return an index mapping between old and new, so we can go back to the model and shrink its embedding tables

The signature should be something like scheme.fit(files: List[Path], min_freq: int = 1, save_at: Path = None) -> List[int]; see the sketch below.

Also, as future work, think about adding a new set of types to the vocab. The model could insert a few rows with random weights, or with the average of the remaining rows.
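A minimal sketch of what such a method could look like. This is hypothetical: the Type record with is_reserved, self.table, encode_str, and save are assumed names, not necessarily the project's internals.

from pathlib import Path
from typing import List, Optional
from collections import Counter

def fit(self, files: List[Path], min_freq: int = 1,
        save_at: Optional[Path] = None) -> List[int]:
    """Shrink this vocab to a dataset; return old indices of the kept types."""
    # 1-2) accept a list of files and compute term frequencies
    freqs = Counter()
    for path in files:
        with path.open(encoding='utf-8') as lines:
            for line in lines:
                freqs.update(self.encode_str(line.strip()))  # assumed API

    # 3-4) drop types below min_freq, but always keep reserved types
    mapping, new_table = [], []
    for old_idx, typ in enumerate(self.table):  # assumed vocab table
        if typ.is_reserved or freqs[typ.name] >= min_freq:
            mapping.append(old_idx)
            new_table.append(typ)

    # 5) save the resulting model at the given path
    if save_at is not None:
        self.save(new_table, save_at)  # assumed helper

    # 6) mapping lets the model shrink embeddings: emb_new = emb_old[mapping]
    return mapping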

Speed up encoding using multiprocessing

Often we will have to encode a huge dataset (say, training data for NMT) in one go.
Encoding can be trivially parallelized using multiprocessing, with a pool of worker processes.
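A sketch of the idea using only the standard library. It assumes codec.encode is picklable (e.g. the codec object itself pickles cleanly), which is required for worker processes:

from multiprocessing import Pool

def encode_all(codec, lines, nproc=4, chunksize=1000):
    """Encode lines in parallel with a pool of worker processes."""
    with Pool(processes=nproc) as pool:
        # imap preserves input order and streams results lazily;
        # chunksize batches lines to cut inter-process overhead
        yield from pool.imap(codec.encode, lines, chunksize=chunksize)

# usage: ids = list(encode_all(codec, open('train.src', encoding='utf-8')))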

Support byte fallback in BPE

  • Instead of UNK-ing OOV characters, support byte fallback (sketched below)
  • A character-coverage parameter (e.g. 99.95%) can be used to decide what portion of characters (the rarest 1 − 0.9995 = 0.05%) falls back to bytes
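A rough sketch of the fallback step. The vocab dict and the reserved block of byte IDs are assumptions, not the project's actual layout:

def encode_char(ch: str, vocab: dict, byte_offset: int) -> list:
    """Return the ID of a known character, else its UTF-8 byte IDs."""
    if ch in vocab:
        return [vocab[ch]]
    # byte fallback: emit one token per UTF-8 byte instead of UNK,
    # assuming IDs [byte_offset, byte_offset + 256) are reserved for bytes
    return [byte_offset + b for b in ch.encode('utf-8')]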

BPE: don't merge categories

Keep certain characters separate; don't merge them even if there is sufficient frequency

  1. digits
  2. punctuations
  3. dates, months, years
  4. ... anything else?

Watch out: be language agnostic. Use the Unicode category table to figure out the digit/punctuation annotation, as in the sketch below.
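Python's unicodedata module already exposes the Unicode category table, so a language-agnostic merge filter might look like this sketch (not the project's implementation; the blocked category set is illustrative):

import unicodedata

# Unicode general categories to keep unmerged:
# Nd = decimal digit; Pc/Pd/Ps/Pe/Pi/Pf/Po = punctuation
NO_MERGE_CATS = {'Nd', 'Pc', 'Pd', 'Ps', 'Pe', 'Pi', 'Pf', 'Po'}

def mergeable(piece_a: str, piece_b: str) -> bool:
    """Allow a BPE merge only if neither side has a blocked character."""
    return not any(unicodedata.category(ch) in NO_MERGE_CATS
                   for ch in piece_a + piece_b)

For example, mergeable('2', '0') is False (digits stay separate) while mergeable('lo', 'w') is True.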

Requires pyspark but doesn't install it

nlcodec requires pyspark but doesn't install it (pyspark isn't declared as a dependency). Installing pyspark separately worked:

  File "/home1/jain593/.conda/envs/nmt_toolkits_rtg/lib/python3.7/site-packages/nlcodec/term_freq.py", line 10, in <module>
    from nlcodec import spark as spark_util
  File "/home1/jain593/.conda/envs/nmt_toolkits_rtg/lib/python3.7/site-packages/nlcodec/spark.py", line 12, in <module>
    from pyspark.sql import DataFrame, SparkSession
ModuleNotFoundError: No module named 'pyspark'
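Until pyspark is declared as a dependency (or an optional extra), the workaround is to install it manually:

$ pip install pyspark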

Bug: nlcodec CLI is broken

Traceback (most recent call last):
  File "/Users/tg/miniconda3/envs/rtg/bin/nlcodec", line 5, in <module>
    from nlcodec.__main__ import mainnlcodec
ImportError: cannot import name 'mainnlcodec' from 'nlcodec.__main__' (/Users/tg/miniconda3/envs/rtg/lib/python3.7/site-packages/nlcodec/__main__.py)

Caused by: a missing comma between two console_scripts entries in setup.py. Python implicitly concatenates the adjacent string literals, producing the bogus entry point mainnlcodec seen in the traceback above.

nlcodec/setup.py

Lines 38 to 39 in b4fcc78

'nlcodec=nlcodec.__main__:main'
'nlcodec-freq=nlcodec.term_freq:main'
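Adding the missing comma stops the two string literals from being concatenated into one entry point:

'nlcodec=nlcodec.__main__:main',
'nlcodec-freq=nlcodec.term_freq:main'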

Classmethod object is not callable

One would think that BPEScheme.name == "bpe" should resolve to True, given

class BPEScheme(CharScheme):
    ...
    @property
    @classmethod
    def name(cls):
        return "bpe"

but...

Traceback (most recent call last):
  File "/Users/ljferrer/miniconda3/envs/rtg/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/Users/ljferrer/miniconda3/envs/rtg/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/Users/ljferrer/Documents/PycharmProjects/rtg/rtg/data/dataset.py", line 215, in __call__
    record = [tokr(col) for col, tokr in zip(record, self.tokenizers)]
  File "/Users/ljferrer/Documents/PycharmProjects/rtg/rtg/data/dataset.py", line 215, in <listcomp>
    record = [tokr(col) for col, tokr in zip(record, self.tokenizers)]
  File "/Users/ljferrer/Documents/PycharmProjects/rtg/rtg/data/codec.py", line 171, in encode_as_ids
    if self.codec.name == "bpe" and split_ratio > 0:
TypeError: 'classmethod' object is not callable
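Stacking @property on top of @classmethod does not create a class-level property in Python 3.7; accessing BPEScheme.name just returns the raw classmethod object. One simple fix (a sketch, not necessarily how the project resolved it) is a plain class attribute, which reads the same on both the class and its instances:

class BPEScheme(CharScheme):
    # plain class attribute: BPEScheme.name == "bpe" is now True
    name = "bpe"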

add byte encoding scheme

Currently, the char, word, bpe, and class schemes are supported.

TODO: add a byte scheme. The char scheme operates on Unicode code points, whereas a byte scheme would operate on UTF-8 bytes.

Challenges:

  • Can this work with 1-byte integers? Can sequences be put into a byte array?
  • It'd be fairly easy to represent using 2-byte ints, but if we can make it work using only 1-byte ints while also keeping 4 extra special types (such as <s> </s> <cls> <pad>), then it'd be genius! See the sketch below.
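The arithmetic behind the challenge: 256 byte values plus 4 special types need 260 IDs, which overflows an unsigned 1-byte integer (256 values max). A sketch that settles for 2-byte ints; the ID layout and numpy usage are assumptions:

import numpy as np

NUM_SPECIALS = 4  # e.g. <s> </s> <cls> <pad>
# 256 byte values + 4 specials = 260 IDs > 256, so uint8 cannot hold them all

def encode_bytes(text: str) -> np.ndarray:
    """Encode text as UTF-8 byte IDs, shifted past the special types."""
    ids = [b + NUM_SPECIALS for b in text.encode('utf-8')]
    return np.array(ids, dtype=np.uint16)  # 2-byte ints: simple and safe

def decode_bytes(ids: np.ndarray) -> str:
    """Drop specials, shift back, and decode UTF-8."""
    payload = bytes(int(i) - NUM_SPECIALS for i in ids if i >= NUM_SPECIALS)
    return payload.decode('utf-8', errors='replace')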
