Comments (5)
Are you sure that the concatenated (tokenized) input length will substantially exceed 512 for RACE? If it does, you can always fine-tune with a longer input length: T5 is trained with relative position encodings, so it works fine on longer-than-512 sequences. For example, we tried fine-tuning on MultiRC with a 1024 sequence length and saw no gains. Also note that decoding would not be expensive; you would be predicting a single token corresponding to the answer index (e.g. "A", "B", "C", or "D").
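To make the single-token-answer idea concrete, here is a minimal sketch of casting a RACE example into the text-to-text format, where the target is just the answer letter. The field names ("article", "question", "options", "answer") and the prompt layout are assumptions for illustration, not the repo's official preprocessor.

```python
def race_to_text(example):
    """Cast a RACE-style dict into a text-to-text pair whose target is a
    single answer letter. Field names and prompt format are hypothetical."""
    options = " ".join(
        f"({letter}) {opt}"
        for letter, opt in zip("ABCD", example["options"])
    )
    inputs = (f"race question: {example['question']} "
              f"options: {options} article: {example['article']}")
    # The model only ever has to decode one token: "A", "B", "C", or "D".
    return {"inputs": inputs, "targets": example["answer"]}
```

With this framing, fine-tuning and inference are the same maximum-likelihood setup as any other T5 task; no ranking machinery is needed.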
I'm not sure I understand how a ranking-based loss/eval would help alleviate any sequence-length issue. I also feel that ranking-based metrics overly complicate things when the basic text-to-text/maximum-likelihood framework seems to work well (for example, we achieved SoTA on WNLI/WSC without a ranking-based loss, which previous work required to get better-than-chance accuracy). But, to answer your question, you can use "perplexity_eval" mode to get the perplexity: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/utils.py#L1697
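For readers who do want the ranking-style evaluation anyway, the idea is to score each candidate answer's log-likelihood under the model (e.g. via perplexity_eval, or by summing target-token log-probs) and take the argmax. A minimal sketch, with the model call stubbed out so the scoring interface is an assumption:

```python
def rank_candidates(inputs, candidates, score):
    """Return candidates sorted best-first by a log-likelihood score.
    `score(inputs, candidate)` stands in for a real model call (e.g.
    summed token log-probabilities of the candidate given the input)."""
    scored = [(score(inputs, c), c) for c in candidates]
    return [c for _, c in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```

The argmax of this ranking is the prediction; but as noted above, plain maximum-likelihood decoding of the answer letter already works well in practice.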
from text-to-text-transfer-transformer.
Thanks for your answer, that's very helpful. Now I know how to solve RACE :P. But I'm still wondering how T5 could handle the case where we might have over 100 queries, as in passage retrieval.
Correct me if I'm wrong, but in the single-query case, isn't passage retrieval loosely equivalent to a span-based QA task? If so, with 100 queries, couldn't you feed in one query at a time along with the document? This would not be the most efficient way to do things but would likely work.
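The one-query-at-a-time approach suggested here can be sketched as a simple loop; the prompt format is an assumption, and `predict` stands in for a fine-tuned T5 inference call (stubbed here so the sketch is self-contained):

```python
def answer_queries(document, queries, predict):
    """Answer each query against the same document, one call per query.
    `predict(prompt)` is a placeholder for real model inference."""
    results = {}
    for query in queries:
        # Hypothetical span-QA-style prompt; adjust to your task format.
        prompt = f"question: {query} context: {document}"
        results[query] = predict(prompt)
    return results
```

As the comment says, with 100 queries this costs 100 forward passes over the document, so it is not the most efficient scheme, but it stays entirely within the text-to-text framework.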
Since T5 uses relative attention, is it possible to use sequence lengths longer than 512 with pretrained T5, without fine-tuning it?
The answer to my earlier issue says we can use any sequence length as input, where the only constraint is memory.
Hi, yes, you can use any sequence length you want. Any relative position difference greater than 128 is mapped to the same ("very far away") bucket. We have gone up to 2048 internally.
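The bucketing described here can be sketched as a simplified re-implementation of the scheme used in the T5 codebase (this is an illustrative version, not the library's own function; the defaults of 32 buckets and max distance 128 follow the paper, and in the bidirectional case the buckets are split between positive and negative offsets):

```python
import math

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    """Map a relative position (key_pos - query_pos) to a bucket index.
    Small offsets each get their own bucket, larger ones share log-spaced
    buckets, and distances at or beyond roughly max_distance all clip into
    one shared "very far away" bucket."""
    bucket = 0
    if bidirectional:
        num_buckets //= 2          # split buckets between the two directions
        if relative_position > 0:
            bucket += num_buckets
        relative_position = abs(relative_position)
    else:
        relative_position = -min(relative_position, 0)
    max_exact = num_buckets // 2
    if relative_position < max_exact:
        return bucket + relative_position   # exact buckets for small offsets
    # log-spaced buckets between max_exact and max_distance, then clip
    val = max_exact + int(
        math.log(relative_position / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return bucket + min(val, num_buckets - 1)
```

Because everything past the last bucket is clipped, a token 129 positions away and one 10,000 positions away receive the same bias, which is why inference at lengths like 2048 works without retraining.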