Comments (4)
Hi @ydennisy,
Good question. How long on average are the documents you want to process? Under the hood, DeCLUTR is using RoBERTa-base, so it can handle text up to 512 wordpieces long. This should be enough to embed full paragraphs.
There are two solutions if your text is longer than this:
- Consider breaking it up into chunks of 512, embedding each chunk, and then taking the average.
- Re-train the method using an encoder that can handle longer text, like Longformer. I can maybe do this for you. Would you have a way of evaluating the model's performance? SentEval won't tell us how we are doing on document-length text.
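The chunk-and-average idea from the first bullet can be sketched as follows. `embed_chunk` is a stand-in for whatever encoder you use (e.g. a DeCLUTR model wrapped to return one vector per input); the 512 limit matches RoBERTa's window:

```python
import numpy as np

def chunk_ids(token_ids, max_len=512):
    """Split a list of token ids into non-overlapping chunks of at most max_len."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

def embed_long_text(token_ids, embed_chunk, max_len=512):
    """Embed each chunk with `embed_chunk` (any encoder that accepts up to
    max_len tokens and returns a 1-D vector), then average over chunks."""
    chunks = chunk_ids(token_ids, max_len)
    return np.stack([embed_chunk(chunk) for chunk in chunks]).mean(axis=0)
```

In practice you would reserve a couple of positions in each chunk for the encoder's special tokens (e.g. `<s>` and `</s>` for RoBERTa), so the effective chunk size is slightly under 512.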
from declutr.
@JohnGiorgi thank you for your reply!
I had not seen Longformer before, so thanks for bringing that to my attention.
We are currently trying to figure out a nice objective metric for various downstream tasks. Once we have this I can share for sure - but I should not have any trouble training.
I think what I did not grasp from the paper is how your model interacts with the models it is based on (RoBERTa / Longformer).
I had not seen Longformer before so thanks for bringing that to my attention.
No problem!
We are currently trying to figure out a nice objective metric for various downstream tasks. Once we have this I can share for sure - but I should not have any trouble training.
Great! I would be very interested in metrics for evaluating long-document embedding.
I think what I did not grasp from the paper is how your model interacts with the models it is based on (RoBERTa / Longformer).
We take a pre-trained language model, like RoBERTa, and extend its training for 1-3 epochs. In this extended phase, the model is trained with masked language modelling (MLM, just like the original pre-training) and our contrastive objective. Our contrastive objective encourages the model to assign similar embeddings to spans of text that are nearby in the same document. Once trained, we use our model (which has an identical architecture to RoBERTa) to embed sentences and evaluate using SentEval.
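The contrastive objective described above can be sketched as an InfoNCE-style loss: each anchor span should be most similar to the positive span drawn from the same document, with the other positives in the batch serving as negatives. This is a simplified illustration, not DeCLUTR's exact implementation, and the temperature value is an assumed default:

```python
import numpy as np

def contrastive_loss(anchors, positives, temperature=0.05):
    """InfoNCE-style contrastive loss over a batch of span embeddings.

    Row i of `anchors` is paired with row i of `positives`; all other
    rows act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature             # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The true pair for row i is column i, so the loss is the mean
    # negative log-probability along the diagonal.
    return float(-np.diag(log_softmax).mean())
```

Minimizing this loss pulls each anchor toward its own positive and pushes it away from spans taken from other documents in the batch, which is what encourages nearby spans to receive similar embeddings.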
The way the codebase is set up, you can apply our training to almost any pretrained MLM model from https://huggingface.co/models.
Closing this for now. Please re-open if you have other questions!