
JohnGiorgi commented on May 26, 2024

Hi @ydennisy,

Good question. How long on average are the documents you want to process? Under the hood, DeCLUTR is using RoBERTa-base, so it can handle text up to 512 wordpieces long. This should be enough to embed full paragraphs.

There are two solutions if your text is longer than this:

  1. Consider breaking it up into chunks of 512 wordpieces, embedding each chunk, and then taking the average (see the sketch after this list).
  2. Re-train the method using an encoder that can handle longer text, like Longformer. I may be able to do this for you. Would you have a way of evaluating the model's performance? SentEval won't tell us how we are doing on document-length text.
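
For option 1, here is a minimal sketch of chunk-and-average embedding, assuming a plain `roberta-base` backbone and mean pooling (both are illustrative choices, not a prescription):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embed_long_document(text: str, chunk_size: int = 512) -> torch.Tensor:
    # Tokenize once (without special tokens), then slice into fixed-size chunks.
    input_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [input_ids[i:i + chunk_size] for i in range(0, len(input_ids), chunk_size)]
    chunk_embeddings = []
    with torch.no_grad():
        for chunk in chunks:
            hidden = model(torch.tensor([chunk])).last_hidden_state
            # Mean-pool the final hidden states to get one vector per chunk.
            chunk_embeddings.append(hidden.mean(dim=1))
    # Average the chunk vectors to get a single document embedding.
    return torch.cat(chunk_embeddings).mean(dim=0)
```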


ydennisy commented on May 26, 2024

@JohnGiorgi thank you for your reply!

I had not seen Longformer before so thanks for bringing that to my attention.

We are currently trying to figure out a nice objective metric for various downstream tasks. Once we have this I can share it for sure, but I should not have any trouble training.

I think what I did not grasp from the paper is how your model interacts with the models it is based on (RoBERTa / Longformer).


JohnGiorgi commented on May 26, 2024

> I had not seen Longformer before so thanks for bringing that to my attention.

No problem!

> We are currently trying to figure out a nice objective metric for various downstream tasks. Once we have this I can share it for sure, but I should not have any trouble training.

Great! I would be very interested in metrics for evaluating long-document embedding.

> I think what I did not grasp from the paper is how your model interacts with the models it is based on (RoBERTa / Longformer).

We take a pre-trained language model, like RoBERTa, and extend its training for 1-3 epochs. In this extended phase, the model is trained with masked language modelling (MLM, just like the original pre-training) and our contrastive objective. Our contrastive objective encourages the model to assign similar embeddings to spans of text that are nearby in the same document. Once trained, we use our model (which has an identical architecture to RoBERTa) to embed sentences and evaluate using SentEval.
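
As a concrete sketch, embedding sentences with the trained model looks like ordinary `transformers` usage plus a pooling step; the checkpoint name below is assumed for illustration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name is an assumption; any DeCLUTR-trained model works the same way.
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-base")

sentences = ["A smiling costumed woman is holding an umbrella."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

# Mean-pool over non-padding tokens to get one embedding per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```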

The way the codebase is set up, you can apply our training to almost any pretrained MLM model from https://huggingface.co/models.
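
For example, swapping in a different backbone is just a matter of pointing at a different hub identifier (the model name here is illustrative):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Any masked language model on the hub can be loaded by name;
# "distilroberta-base" is just an example.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```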


JohnGiorgi commented on May 26, 2024

Closing this for now. Please re-open if you have other questions!

