riccorl / transformers-embedder

A Word Level Transformer layer based on PyTorch and 🤗 Transformers.

Home Page: https://riccorl.github.io/transformers-embedder


transformers-embedder's Introduction

Transformers Embedder


A Word Level Transformer layer based on PyTorch and 🤗 Transformers.

How to use

Install the library from PyPI:

pip install transformers-embedder

or from Conda:

conda install -c riccorl transformers-embedder

It offers a PyTorch layer and a tokenizer that support almost every pretrained model from the Hugging Face 🤗 Transformers library. Here is a quick example:

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

model = tre.TransformersEmbedder(
    "bert-base-cased", subword_pooling_strategy="sparse", layer_pooling_strategy="mean"
)

example = "This is a sample sentence"
inputs = tokenizer(example, return_tensors=True)
{
   'input_ids': tensor([[ 101, 1188, 1110, 170, 6876, 5650,  102]]),
   'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]),
   'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]),
   'scatter_offsets': tensor([[0, 1, 2, 3, 4, 5, 6]]),
   'sparse_offsets': {
        'sparse_indices': tensor(
            [
                [0, 0, 0, 0, 0, 0, 0],
                [0, 1, 2, 3, 4, 5, 6],
                [0, 1, 2, 3, 4, 5, 6]
            ]
        ), 
        'sparse_values': tensor([1., 1., 1., 1., 1., 1., 1.]), 
        'sparse_size': torch.Size([1, 7, 7])
    },
   'sentence_lengths': [7]  # with special tokens included
}
outputs = model(**inputs)
# outputs.word_embeddings[:, 1:-1].shape    # remove [CLS] and [SEP]
torch.Size([1, 5, 768])
# len(example.split())
5

Info

One of the annoyances of using transformer-based models is that it is not trivial to compute word embeddings from the sub-token embeddings they output. With this API, getting word-level embeddings from (theoretically) every transformer model that 🤗 Transformers supports is as easy as using 🤗 Transformers itself.

Model

Subword Pooling Strategy

The TransformersEmbedder class offers three strategies to turn sub-token embeddings into word embeddings:

  • subword_pooling_strategy="sparse": computes the mean of the embeddings of the sub-tokens of each word (i.e. the sub-token embeddings of a word are pooled together) using a sparse matrix multiplication; see the sketch after the feature table below. This is the default strategy.
  • subword_pooling_strategy="scatter": computes the mean of the embeddings of the sub-tokens of each word using a scatter-gather operation. It is not deterministic, but it works with ONNX export.
  • subword_pooling_strategy="none": returns the raw output of the transformer model, without sub-token pooling.

Here is a little feature table:

Pooling   Deterministic   ONNX
Sparse    Yes             No
Scatter   No              Yes
None      Yes             Yes
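
To make the sparse strategy more concrete, here is a minimal, self-contained sketch (a toy illustration, not the library's actual internals) of mean-pooling sub-token embeddings into word embeddings with a single sparse matrix multiplication:

import torch

# toy sub-token vectors for ["[CLS]", "em", "##bed", "##dings", "[SEP]"]
hidden_size = 8
subtoken_embeddings = torch.randn(5, hidden_size)
# word index of each sub-token: [CLS] -> 0, "embeddings" -> 1 (three pieces), [SEP] -> 2
word_ids = torch.tensor([0, 1, 1, 1, 2])
n_words, n_subtokens = 3, 5

# each sub-token contributes 1 / (number of pieces in its word), so the
# matrix product below is a per-word mean of the sub-token embeddings
counts = torch.bincount(word_ids, minlength=n_words).float()
values = 1.0 / counts[word_ids]
indices = torch.stack([word_ids, torch.arange(n_subtokens)])
pooling_matrix = torch.sparse_coo_tensor(indices, values, (n_words, n_subtokens))

word_embeddings = torch.sparse.mm(pooling_matrix, subtoken_embeddings)
print(word_embeddings.shape)  # torch.Size([3, 8])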

Layer Pooling Strategy

There are also multiple types of output you can get through the layer_pooling_strategy parameter:

  • layer_pooling_strategy="last": returns the last hidden state of the transformer model
  • layer_pooling_strategy="concat": returns the concatenation of the selected output_layers of the
    transformer model
  • layer_pooling_strategy="sum": returns the sum of the selected output_layers of the transformer model
  • layer_pooling_strategy="mean": returns the average of the selected output_layers of the transformer model
  • layer_pooling_strategy="scalar_mix": returns the output of a parameterised scalar mixture layer of the selected output_layers of the transformer model

If you also want all the outputs from the Hugging Face model, you can set return_all=True to get them.

class TransformersEmbedder(torch.nn.Module):
    def __init__(
        self,
        model: Union[str, tr.PreTrainedModel],
        subword_pooling_strategy: str = "sparse",
        layer_pooling_strategy: str = "last",
        output_layers: Tuple[int] = (-4, -3, -2, -1),
        fine_tune: bool = True,
        return_all: bool = True,
    )
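
For example, based on the signature above, an embedder that learns a scalar mixture of the last four layers while keeping the transformer frozen could be configured as follows (a sketch; the argument names are those shown above, the values are illustrative):

import transformers_embedder as tre

model = tre.TransformersEmbedder(
    "bert-base-cased",
    subword_pooling_strategy="sparse",
    layer_pooling_strategy="scalar_mix",
    output_layers=(-4, -3, -2, -1),
    fine_tune=False,   # keep the transformer weights frozen
    return_all=False,  # do not return all the extra Hugging Face outputs
)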

Tokenizer

The Tokenizer class provides the tokenize method to preprocess the input for the TransformersEmbedder layer. You can pass raw sentences, pre-tokenized sentences, and batches of sentences; it preprocesses them and returns a dictionary with the inputs for the model. By passing return_tensors=True, it returns the inputs as torch.Tensor.

By default, if you pass the text (or batch) as strings, it uses the Hugging Face tokenizer to tokenize it.

text = "This is a sample sentence"
tokenizer(text)

text = ["This is a sample sentence", "This is another sample sentence"]
tokenizer(text)

You can pass a pre-tokenized sentence (or batch of sentences) by setting is_split_into_words=True

text = ["This", "is", "a", "sample", "sentence"]
tokenizer(text, is_split_into_words=True)

text = [
    ["This", "is", "a", "sample", "sentence", "1"],
    ["This", "is", "sample", "sentence", "2"],
]
tokenizer(text, is_split_into_words=True)

Examples

First, initialize the tokenizer

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")
  • You can pass a single sentence as a string:
text = "This is a sample sentence"
tokenizer(text)
{
    'input_ids': [[101, 1188, 1110, 170, 6876, 5650, 102]],
    'token_type_ids': [[0, 0, 0, 0, 0, 0, 0]],
    'attention_mask': [[1, 1, 1, 1, 1, 1, 1]],
    'scatter_offsets': [[0, 1, 2, 3, 4, 5, 6]],
    'sparse_offsets': {
        'sparse_indices': tensor(
            [
                [0, 0, 0, 0, 0, 0, 0],
                [0, 1, 2, 3, 4, 5, 6],
                [0, 1, 2, 3, 4, 5, 6]
            ]
        ),
        'sparse_values': tensor([1., 1., 1., 1., 1., 1., 1.]),
        'sparse_size': torch.Size([1, 7, 7])
    },
    'sentence_lengths': [7],
}
  • A sentence pair
text = "This is a sample sentence A"
text_pair = "This is a sample sentence B"
tokenizer(text, text_pair)
{
    'input_ids': [[101, 1188, 1110, 170, 6876, 5650, 138, 102, 1188, 1110, 170, 6876, 5650, 139, 102]],
    'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]],
    'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],
    'scatter_offsets': [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]],
    'sparse_offsets': {
        'sparse_indices': tensor(
            [
                [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  0],
                [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
                [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
            ]
        ),
        'sparse_values': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]),
        'sparse_size': torch.Size([1, 15, 15])
    },
    'sentence_lengths': [15],
}
  • A batch of sentences or sentence pairs. Using padding=True and return_tensors=True, the tokenizer returns the text ready for the model
batch = [
    ["This", "is", "a", "sample", "sentence", "1"],
    ["This", "is", "sample", "sentence", "2"],
    ["This", "is", "a", "sample", "sentence", "3"],
    # ...
    ["This", "is", "a", "sample", "sentence", "n", "for", "batch"],
]
tokenizer(batch, padding=True, return_tensors=True)

batch_pair = [
    ["This", "is", "a", "sample", "sentence", "pair", "1"],
    ["This", "is", "sample", "sentence", "pair", "2"],
    ["This", "is", "a", "sample", "sentence", "pair", "3"],
    # ...
    ["This", "is", "a", "sample", "sentence", "pair", "n", "for", "batch"],
]
tokenizer(batch, batch_pair, padding=True, return_tensors=True)
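
As a follow-up sketch (reusing the tokenizer and model created in the quick example above), the padded batch can be fed directly to the embedder:

# assumes tokenizer and model were created as in the quick example above
inputs = tokenizer(batch, padding=True, return_tensors=True)
outputs = model(**inputs)
# one vector per word (special tokens included), padded to the longest sentence in the batch
print(outputs.word_embeddings.shape)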

Custom fields

It is possible to add custom fields to the model input and tell the tokenizer how to pad them using add_padding_ops. Start by initializing the tokenizer with the model name:

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

Then add the custom fields to it:

custom_fields = {
  "custom_field_1": [
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
  ]
}

Now we can add the padding logic for our custom field custom_field_1. The add_padding_ops method takes as input:

  • key: the name of the field in the tokenizer input
  • value: the value to use for padding
  • length: the length to pad to. It can be an int, or one of two string values: subword, in which case the element is padded to match the length of the sub-tokens, and word, in which case the element is padded to match the length of the batch after the sub-tokens are merged into words.
tokenizer.add_padding_ops("custom_field_1", 0, "word")

Finally, we can tokenize the input with the custom field:

text = [
    "This is a sample sentence",
    "This is another example sentence just make it longer, with a comma too!"
]

inputs = tokenizer(text, padding=True, return_tensors=True, additional_inputs=custom_fields)

The inputs are ready for the model, including the custom field.

>>> inputs

{
    'input_ids': tensor(
        [
            [ 101, 1188, 1110, 170, 6876, 5650, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
            [ 101, 1188, 1110, 1330, 1859, 5650, 1198, 1294, 1122, 2039, 117, 1114, 170, 3254, 1918, 1315, 106, 102]
        ]
    ),
    'token_type_ids': tensor(
        [
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        ]
    ), 
    'attention_mask': tensor(
        [
            [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        ]
    ),
    'scatter_offsets': tensor(
        [
            [ 0, 1, 2, 3, 4, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
            [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 14, 15, 16]
        ]
    ),
    'sparse_offsets': {
        'sparse_indices': tensor(
            [
                [ 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  1],
                [ 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 14, 15, 16],
                [ 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
            ]
        ),
        'sparse_values': tensor(
            [1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
            1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
            1.0000, 1.0000, 0.5000, 0.5000, 1.0000, 1.0000, 1.0000]
        ), 
        'sparse_size': torch.Size([2, 17, 18])
    },
    'sentence_lengths': [7, 17],
}

Acknowledgements

Some code in the TransformersEmbedder class is taken from the PyTorch Scatter library. The pretrained models and the core of the tokenizer are from 🤗 Transformers.

transformers-embedder's People

Contributors

andreim14, deepsource-autofix[bot], deepsourcebot, dependabot-preview[bot], dependabot[bot], flegyas, leonardoemili, riccorl


transformers-embedder's Issues

Faster indices computation for build_offsets

Feature proposal

When building the word_ids mask for text pairs, we can look for the last index of the BPE tokens in the first sentence and update the second part accordingly. As of now, the implementation is a bit slow due to the use of for loops. It can be performed more efficiently if we vectorize the function so that it looks for the offsets batch-wise (e.g. I used NumPy for this purpose, but I'm confident it can be implemented with PyTorch too).

Code snippet

from transformers import AutoTokenizer
import numpy as np

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

sents = [
    ("Today I'm using BERT embeddings.", "What's the wheather like in London?"),
    ("My Name is Luke.", "What's your name?")
]

# Tokenize the text pairs
inputs = tokenizer(sents, return_special_tokens_mask=True)

# Select the offset indices 
idxs = np.argwhere(np.diff(np.concatenate(inputs.special_tokens_mask)) == 1)[::2].squeeze()

# Obtain the batch word_ids
word_ids = np.concatenate([inputs.word_ids(i) for i in range(len(inputs.input_ids))])

offsets = word_ids[idxs].astype(int)
print(offsets)

Detailed explanation

# [SEP] and [CLS] are encoded as `1s` in the `special_tokens_mask`
>>> inputs.special_tokens_mask
[
   [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
   [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]
]

# Look for differences between contiguous elements to find the offsets:
>>> np.diff(np.concatenate(inputs.special_tokens_mask))
array([-1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1, -1,  0,  0,    <- "1s" at indexes: 13, 24, 31, 38
        0,  0,  0,  0,  0,  0,  0,  1,  0, -1,  0,  0,  0,  0,  1, -1,  0,
        0,  0,  0,  0,  1])

# Alternate sequence of [SEP], [CLS], [SEP], [CLS], ... select [SEP]s only (i.e. the even indices)

>>> idxs = np.argwhere(np.diff(np.concatenate(inputs.special_tokens_mask)) == 1)[::2].squeeze()
array([13, 31])   <- offset indices (i.e. lengths of the first text in a text pair when concatenated)
                       i.e. BPE lengths are: 13 - 0 = 13 for the first sentence and 31 - 26 for the second

# Get the word_ids; unfortunately the transformers library doesn't expose them as an attribute like `special_tokens_mask`
>>> word_ids = np.concatenate([inputs[i].word_ids for i in range(len(inputs.input_ids))])
>>> word_ids
array([None, 0, 1, 2, 3, 4, 5, 5, 5, 6, 6, 6, 6, 7, None, 0, 1, 2, 3, 4,      <- offsets are: 7 and 4
      4, 5, 6, 7, 8, None, None, 0, 1, 2, 3, 4, None, 0, 1, 2, 3, 4, 5,
      None], dtype=object)

# Select the sentence offsets:
>>> word_ids[idxs].astype(int)
array([7, 4])
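
For completeness, a possible PyTorch version of the index computation above (a sketch, not part of the library; it reuses the inputs produced by the code snippet and still relies on the tokenizer's word_ids):

import numpy as np
import torch

# positions where the special tokens mask switches 0 -> 1, i.e. the last
# sub-token before each [SEP]; keep every other one for the first-segment offsets
special = torch.tensor([m for mask in inputs.special_tokens_mask for m in mask])
idxs = torch.nonzero(torch.diff(special) == 1).flatten()[::2]
print(idxs)  # tensor([13, 31])

# word_ids still come from the tokenizer (they contain None for special tokens)
word_ids = np.concatenate([inputs.word_ids(i) for i in range(len(inputs.input_ids))])
offsets = torch.as_tensor(word_ids[idxs.numpy()].astype(int))
print(offsets)  # tensor([7, 4])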
