
Comments (5)

craffel commented on June 17, 2024

Are you sure that the concatenated (tokenized) input length will substantially exceed 512 for RACE? If it is substantially longer than 512 tokens, you can always fine-tune with a longer input length. T5 is trained with relative position encodings, so it works fine on longer-than-512 sequences. For example, we tried fine-tuning on MultiRC with a 1024 sequence length and saw no gains. Also note that decoding would not be expensive; you would be predicting a single token corresponding to the answer index (e.g. "A", "B", "C", or "D").
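
A rough sketch of that framing, using the Hugging Face port of T5 rather than this repository's own preprocessing pipeline (the RACE field names below are just illustrative):

```python
# Sketch: cast one RACE example as text-to-text and check its tokenized length.
# Uses the Hugging Face port of T5; the `example` fields are illustrative and
# not this repository's actual RACE preprocessing.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

example = {
    "article": "Long passage text ...",
    "question": "What does the author imply?",
    "options": ["first option", "second option", "third option", "fourth option"],
    "answer": "B",
}

options = " ".join(
    f"({letter}) {text}" for letter, text in zip("ABCD", example["options"])
)
input_text = (
    f"race question: {example['question']} "
    f"options: {options} article: {example['article']}"
)
target_text = example["answer"]  # the model only has to emit a single letter

num_tokens = len(tokenizer(input_text).input_ids)
print(num_tokens)  # if this is well above 512, fine-tune with a longer input length
```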

I'm not sure I understand how doing the ranking-based loss/eval would help alleviate any sequence length issue. I also feel that the ranking-based metrics overly complicate things when the basic text-to-text/maximum likelihood framework seems to work well (for example, we achieved SoTA on WNLI/WSC without using a ranking-based loss, as was required by previous work to get better-than-chance accuracy). But, to answer your question, you can use "perplexity_eval" mode to get the perplexity. https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/utils.py#L1697
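
For completeness, here is a hedged sketch of what a likelihood-based ranking over the answer letters could look like; it uses the Hugging Face port of T5 and is not the perplexity_eval mode linked above:

```python
# Sketch: rank candidate answers by their log-likelihood under the model.
# Uses the Hugging Face port of T5; this is an illustration, not the
# `perplexity_eval` mode from mesh_tensorflow linked above.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def log_likelihood(input_text: str, candidate: str) -> float:
    """Mean per-token log-likelihood of `candidate` given `input_text`."""
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    labels = tokenizer(candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        # The returned loss is the mean cross-entropy over the target tokens.
        loss = model(input_ids=input_ids, labels=labels).loss
    return -loss.item()

input_text = "race question: ... options: (A) ... (B) ... article: ..."
candidates = ["A", "B", "C", "D"]
print(max(candidates, key=lambda c: log_likelihood(input_text, c)))
```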

desperadoola commented on June 17, 2024

Thanks for your answer, that's very helpful. Now I know how to solve RACE :P. But still, I'm wondering how T5 could be used to handle the case where we might have over 100 queries, as in passage retrieval.

craffel commented on June 17, 2024

Correct me if I'm wrong, but in the single-query case, isn't passage retrieval loosely equivalent to a span-based QA task? If so, with 100 queries, couldn't you feed in one query at a time along with the document? This would not be the most efficient way to do things but would likely work.
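
A minimal sketch of that one-query-at-a-time loop, again with the Hugging Face port of T5; the "query: ... document: ..." framing is illustrative, not a task defined in this repository:

```python
# Sketch: handle many queries against one document by feeding one
# (query, document) pair at a time. Uses the Hugging Face port of T5; the
# input framing is illustrative, not a task defined in this repository.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

document = "Long document text ..."
queries = [f"example query number {i}" for i in range(100)]

predictions = []
for query in queries:
    input_text = f"query: {query} document: {document}"
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=4)
    predictions.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))

print(predictions[:3])
```

As noted above, this re-encodes the document once per query, so it is simple but not the most efficient approach.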

shamanez commented on June 17, 2024

@craffel

Since T5 uses relative attention mechanisms, is it possible to use sequence lengths of more than 512 with pretrained T5, without fine-tuning it?

The answer to my issue says we can use any sequence length as the input, where the only constraint is memory.

craffel commented on June 17, 2024

Hi, yes, you can use any sequence length you want. Any relative position difference greater than 128 is mapped to the same ("very far away") bucket. We have gone up to 2048 internally.
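
For intuition, here is a simplified scalar sketch of the logarithmic relative-position bucketing; the real implementation in the codebase is vectorized and also handles the bidirectional case:

```python
# Simplified scalar sketch of T5-style relative position bucketing
# (typical defaults: 32 buckets, max_distance=128). Everything past
# max_distance lands in the same final bucket, which is why sequences longer
# than the training length still get sensible position biases.
import math

def relative_position_bucket(distance: int, num_buckets: int = 32,
                             max_distance: int = 128) -> int:
    """Map a non-negative relative distance to a bucket index (unidirectional case)."""
    max_exact = num_buckets // 2
    if distance < max_exact:
        return distance  # small offsets each get their own bucket
    # Larger offsets are spaced logarithmically up to max_distance.
    bucket = max_exact + int(
        math.log(distance / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return min(bucket, num_buckets - 1)  # distances >= max_distance share one bucket

print(relative_position_bucket(5))     # small offset: its own bucket
print(relative_position_bucket(64))    # a log-spaced bucket
print(relative_position_bucket(128))   # the final, "very far away" bucket
print(relative_position_bucket(2048))  # same final bucket
```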
